ISL best practice

Hello All,
I have two MDS 9509s, one at each of two sites, on the same fabric, with an inter-site DWDM link of 1 Gbps.
I have 48-port blades and 12-port blades.
Since the ISL's maximum theoretical bandwidth is 1 Gbps, I am thinking of locking the switch ISL ports at 1 Gbps (end to end) at both ends.
=========================================
Doubt : 1
Is it a good idea to utilise the 48-port blades for these ISLs,
OR
use the costly 12-port blades for them?
=========================================
Doubt : 2
I want a maximum of 16 Gbps aggregate across the sites.
Is it a good idea to go with
2 Gbps x 8 ISLs or 1 Gbps x 16 ISLs?
=========================================
Doubt : 3
These ISLs are primarily used for replication traffic (TrueCopy).
Is it a good idea to bundle the links, for
1. load balancing
2. line card failure / replacement?
=========================================
Doubt : 4
If I go with a port channel,
is it a good idea to go with exchange-based load balancing?
=========================================
Doubt : 5
Since the link is used for replication,
is it a good idea to switch in-order delivery (IOD) off?
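Roughly what I have in mind for Doubts 3-5, as a sketch only (interface and VSAN numbers are placeholders, and exact syntax varies by SAN-OS/NX-OS release):
  ! bundle the ISLs into one port channel, spreading members across line cards
  interface port-channel 1
    switchport mode E
  interface fc1/1, fc2/1
    channel-group 1 force
    no shutdown
  ! exchange-based (OX_ID) load balancing, set per VSAN
  vsan database
    vsan 10 loadbalancing src-dst-ox-id
  ! in-order delivery off for the replication VSAN
  no in-order-guarantee vsan 10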
=========================================
Please help .....
Thanks in Advance :):)
Cheers
Krish

Hey thanks for this quick response ...
Some more doubts in mind ... :)
Doubt : 6
What should the B2B credits be set to on the port for a distance of 130 km? Do I need to increase the B2B credits or leave them at the default?
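My rough back-of-envelope for 130 km at 1 Gbps (please sanity-check; it assumes full-size 2148-byte frames and ~5 us/km propagation in fibre):
  frame serialization at 1 Gbps : 2148 bytes / ~100 MB/s  ~ 21 us
  round trip over 130 km        : 260 km x 5 us/km        ~ 1300 us
  credits to keep the pipe full : 1300 / 21               ~ 62, so ~70 with headroom
  switchport fcrxbbcredit 70
(Values beyond the line card default need extended BB credits, which depend on the module and licensing, if I understand correctly.)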
========================================================
Doubt : 7
I am planning to test various frame sizes across the ISL. What impact will frame size have on latency?
=======================================================
Doubt : 8
Since the end-to-end speed is only 1 Gbps, is it a good idea to lock even the storage replication ports to 1 Gbps end to end?
========================================================

Similar Messages

  • Best Practice on trunking VSAN 1

    Hello all
    I'd appreciate feedback on whether this is looked upon as good practice or not.
    For all the Cisco SAN implementations I have done to date, I have always trunked VSAN 1 (but obviously NOT used it for customer data).  I do this for a couple of reasons.
    1.  It is a good test for an ISL: you can initially trunk VSAN 1 to be 100% sure all is OK, before affecting customer VSANs.
    2.  Fabric Manager is not "erroring" by reporting segmented VSANs.
    What do the rest of you do?  Is there a Cisco best practice on this?
    Thanks
    Steven

    Steven,
    CFS stands for Cisco Fabric Services. It can be used to distribute configuration information between MDS switches to keep the configuration consistent. It can be used for various things like NTP settings, syslog config, Call Home config, etc.
    You can find more information in the MDS documentation on CCO. See here for example:
    http://www.cisco.com/en/US/docs/switches/datacenter/mds9000/sw/5_0/configuration/guides/sysmgnt/nxos/cfs.html
    CFS uses VSAN 1.
    As for best practices, there is a document also on CCO:
    http://www.cisco.com/en/US/prod/collateral/ps4159/ps6409/ps5990/white_paper_C11-515630.html
    This talks a bit about VSAN 1. I can tell you that I have installed lots of MDS SANs during the past 9 years and it is one of the things I’d do from experience. Use VSAN 1 for management purposes like CFS and put your real production traffic in other VSANs.
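    For example, distributing the NTP config over CFS looks roughly like this (a sketch; the server IP is a placeholder):
    ntp distribute
    ntp server 10.1.1.1
    ntp commit
    The same distribute/commit pattern applies to syslog, Call Home and the other CFS-capable features.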
    Ralf

  • SINGLE 9513 with A&B 9216 best practice....

    We have a couple of 9216s (A & B side; VSAN numbers identical) and have reached our port count limit. We purchased one 9513 with two 48-port FC modules and one X9304-18k4 line-rate card (18 FC / 4 IP). The goal is to make the single 9513 our principal switch, ISL'ed to the 9216s.
    Is having one 9513 a good decision? Our thought was that it has so much redundancy built in, and the performance is far beyond what we need, that we could just use one. I have never had anything but an A and B side of the fabric, so wrapping my head around this and trying to keep it within best practice is proving difficult.
    The 9216 A & B VSAN numbers are identical. When we ISL them, the fabrics will merge, thereby collapsing the A and B sides together. I really don't want to do that (it would essentially become a meshed fabric, right?). I am thinking that I need to renumber the VSANs on the B side to not match A. Maybe make the even-numbered VSANs the A side and the odd-numbered VSANs the B side. That way I can keep the ISLs independent from each other and prevent the zones from collapsing into each other.
    Also keep in mind that we may (eventually) purchase another 9513 and just move one of the FC48 modules over to separate the A and B sides again at a later date. I want to keep this as flexible as possible in case that does happen.
    Thoughts - comments - suggestions all welcome! Just please be nice. I am learning....

    I think you are right on. The single 9513 cannot have duplicate VSANs for the A and B 9216s. The odd/even idea makes the most sense. This way you can leverage the 9513's redundancy. You can match the same VSANs that exist in the A fabric, and only permit those VSANs across the ISLs to the A 9216. You will have to renumber the VSANs on the B 9216, then match them on the 9513 and, again, only permit those VSANs on the ISLs to the B 9216.
    Things to keep in mind if you renumber the VSANs on the B 9216: if you match the domain numbers used on the corresponding A 9216 VSANs and make them static, then even if someone cross-connects a cable the VSANs will not merge, since there is a domain conflict. If you change the domain from the current one in use, hosts like AIX and HP-UX will get a new FCID, and you may have to rescan the host to resolve the LUN bindings.
    If you have AIX and HP-UX, you may want to ensure that the target devices they use get the same FCID after the VSAN renumber, to avoid having to perform the rescan (this may prevent matching the same domains used on the A 9216).
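    For example, pinning a domain statically looks like this (a sketch; the numbers are placeholders):
    ! same domain ID as the corresponding A-fabric VSAN, pinned static
    fcdomain domain 10 static vsan 10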
    Hope this helps,
    Mike

  • Best Practice - Flexpod Design

    I am working through a 5548, UCS, and NetApp design. We are using FC, not FCoE. I have followed the FlexPod deployment standard to a "T" but have a couple of questions. First, as we are following our physical layout (EoR), we are placing a pair of 5548s at the end of each row to handle FC within that row (client request). We have various FC devices throughout each row, with UCS in one row, NetApp in another, and so forth. The question I have is in regards to "best practice" with the FlexPod standard. Nowhere have I found a FlexPod design document which shows a cascade/aggregation design using an EoR switch connected to another EoR switch, with a target/initiator separated by two 5548s (NPIV/NPV). Is such a design NOT recommended? Can it be done within the standard? The second question is in regards to the actual configuration. In this mode, TARGET ---- 5548(row1) ----- 5548(row2) ---- INITIATOR, I assume the first 5548 is in NPV mode, the second in NPIV mode. Correct?
    We have not implemented in this fashion before so I am looking for some standards document/configurations,etc related to this. Your help is greatly appreciated...

    The link between the NPV switch and the NPIV core is not an ISL.
    The link between the NPV switch and the NPIV core is an F-port link. An NPV switch does not run the Fibre Channel services and therefore has NO Fibre Channel domain ID.
    The NP (Node Proxy) port type is introduced on the NPV switch, since it sends requests to the NPIV core for processing and then relays any applicable information to the downstream hosts.
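    As a minimal sketch of the two sides (assuming NX-OS releases where these are features; note that enabling NPV is disruptive, it erases the config and reboots the switch):
    ! on the NPIV core (keeps its domain ID and runs the FC services)
    feature npiv
    ! on the NPV edge switch (proxies fabric logins upstream)
    feature npv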
    As far as FlexPod, this doc talks about the 5548 in NPIV mode with UCS in NPV mode:
    http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns944/whitepaper__c07-727095.html
    This might not be a full match, but it touches on the features you are discussing.
    I hope this helps.
    Regards,
    Carlos

  • Port Channel Best Practice

    Hi All,
    I have an MDS 9509 with port channels going to my Cisco blade switches on my HP ProLiant blade enclosure.
    I have NO ports left on my MDS 9509, but DO have some remaining on the blade enclosure.
    The question is: can I port channel from the blade enclosure to another edge switch (MDS 9148)?
    Is that a supported configuration / best practice, and what are the ramifications if I do that?
    So I'm going from core to edge, and then to another edge switch, with a port channel.
    Thanks,
    Matt

    Hi Matthew,
    Sorry for the misunderstanding,  your to-be diagram cleared up a lot for me :-)
    First off, yes, it will work. There's no reason it shouldn't and if you have the external ports free on your 9124e, you can hook up a new switch.
    It's far from a conventional design, because blade switches are supposed to sit at the edge. It's not best practice.
    What I would recommend is that you move some of the storage from your edge to the 9148, and treat it as a collapsed core, sharing an edge switch (the blade switch).  You can then ISL the 9148 and the 9509 together into a somewhat sensible topology.
    So for one fabric this would be
    (disks) --- 9148 --- 9509 --- (disks)   (some disks moved to the left to free up ports for ISLs)
                   \       /
                    9124e
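    If you do ISL the 9148 to the 9509, a basic two-link port channel between them would look something like this (a sketch; interface numbers are placeholders):
    interface port-channel 10
      switchport mode E
    interface fc1/47, fc1/48
      channel-group 10 force
      no shutdown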
    Or you can contact your sales team and look to swap some Linecards with higher port density ones.
    Lastly I would like to note that, however you link up the switches, most combinations available to you will 'work'.  So as a temp solution you can go ahead with the (core - blade - edge) scenario.  Just know that you'll be introducing bottlenecks and potential weak points into your network. 

  • Best Practices for multi-switch MDS 9124 Implementations

    Hi,
    I was wondering if anyone had any links to best-practices guides, or any experience, building multi-switch fabrics with the Cisco MDS 9124 or similar (small) switches? I've read most of the Fibre Channel books out there, and they all seem pretty heavy on theory and Fibre Channel protocol operations but fall short when it comes to real-world deployment scenarios. Something akin to the Case Studies sections a lot of the CCIE literature has, but anything would be appreciated.
    Regards,
    Meredith Shaebanyan

    Hi Meredith,
    www.Whitepapers.zdnet.com has links to good reading, including items like:
    http://www.vmware.com/pdf/esx_san_cfg_technote.pdf is probably a typical SAN environment these days. It's basic and just put your 9124's in where the switches are.
    http://www.sun.com/bigadmin/features/hub_articles/san_fundamentals.pdf is for bigger SANs such as DR, etc.
    Things to consider with 9124's are:
    They can break, so keep a good, current backup on a TFTP/FTP/SCP server.
    Consider that if you have all the ports in use, the two 8-port licences are not going to work on a replacement switch, as they are bound to your host ID. The vendor that sold you the switch should be able to get replacements quickly, but you will lose time waiting for them.
    Know exactly what the snmp-server command does: if your 9124 is replaced, you load your backup config, and you use Fabric Manager, it won't be able to manage the replacement 9124 unless you change the admin password so the SNMP credentials get updated (see the sketch after this list).
    9124s/9134s don't have enough buffer credits to expand beyond about 10 km.
    Any ISLs used between switches should always number at least two; use port channels where possible.
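    As a rough sketch of that reset on a replacement switch (the password is a placeholder; as far as I know, the CLI and SNMP credentials stay in sync when set this way):
    username admin password N3wPassw0rd role network-admin
    snmp-server user admin network-admin auth md5 N3wPassw0rd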
    The 9124 or 9124e or 9134 are great value based switches. I keep a spare for training and emergencies. We use them in a core/edge solution and I am very satisfied with them. I have only had one failure with Cisco switches in the last 5 years and it was a 9140 that sat around for far too long doing nothing. The spare meant we were up and running in 30 minutes from the time we noticed the failure and got to the data centre. As there were two paths, no one actually noticed anything. My management system alerted me.
    Remember to make absolutely sure that any servers attached to the SAN have multipathing software. The storage array vendors (HDS, EMC, etc) can sell you the software such as HDLM or Powerpath. You can use an independent solution such as Veritas DMP. Just don't forget to use it.
    Follow the guidelines in the two documents and get some training as the MDS training is very good indeed. 5 days training and you will be confident about what to do in any sized SAN including Brocade and McData.
    A small SAN is just as satisfying as a large one. If in doubt, get a consultant to tell you what to do.
    Is that what you were after? I hope it was not too simple.
    Stephen

  • IVR (Inter-VSAN Routing) - Best practices questions

    Hi there,
    We have a situation where we will have multiple customers hosted on a 9513 and sharing a single storage array.
    We want to keep them logically separated in their own VSANs, but of course the storage array will need to be zoned to all the customers' hosts.
    Now IVR should be the thing to use, but I'm getting resistance from the local team (screams of "Nooooo!!! They're EVILLLLL!!!")... so I want to find out if there are some best practices around IVR use.
    Should it be used only for light-duty stuff? (Though at present we use it with tape backup, which isn't exactly "light".)
    Do they impact performance to a measurable degree ?
    Are they stable ?
    What can go wrong with them ? And does it happen often ?
    Thanks!

    IVR does not impact application I/O performance, because all the VSAN-rewrite and FCID-rewrite actions are done in the hardware ASICs. The IVR process on the supervisor is responsible for managing the configuration and ensuring the rewrite tables are programmed in the line cards. The process is stable.
    Most of the issues I have seen are in environments with multiple IVR enabled MDS switches ISL'd together or an MDS IVR enabled switch connected to a McData/Brocade in an interop mode.
    Like any feature there have been bugs and it pays to check the SAN-OS release notes when planning installs. For example, a config change on one switch does not get properly pushed to another IVR switch or a forwarding table for an ISL interface does not get correctly programmed. There have also been a fair share of user misconfigurations which could have been avoided if Cisco Fabric Services (CFS) was enabled for IVR. This is done with the 'ivr distribute' command. Without this it is very difficult in large topologies of multiple IVR switches to ensure they have a consistent IVR config. In other cases there have been problems from a mix of IVR enabled switches running different releases of SAN-OS, e.g. mixing 3.0 with 3.2.
    Best practice is to have dual physical fabrics, upgrade one fabric at a time ensuring all IVR switches in a fabric run same SAN-OS release.
    A single IVR switch is much easier to implement. The MDS Configuration Guide has a list of best practices for IVR, and one of those is to use the NAT option. Personally I would avoid the NAT option where you can, as NAT makes any troubleshooting harder when trying to figure out the domain ID translations. You would also minimize the risk of hitting some NAT-related bugs, though you could avoid most of these by checking the workarounds documented in the Release Notes. And with NAT you need to also configure persistent virtual domains and FCIDs to cater for AIX and HP-UX systems, which cannot handle the FCID of the target changing whenever the exported virtual domain ID changes.
    To give NAT credit, each VSAN is represented by a single virtual domain. In regular non-NAT mode, each switch in a VSAN is represented by a virtual domain, meaning you eat up more virtual domain IDs. So in large topologies with many domain IDs there are scalability advantages to using NAT, and the IVR updates between switches are more efficient with fewer virtual domain IDs to advertise. Of course, NAT must be used if merging physical fabrics with the same domain ID when you cannot afford the downtime to change one of the switch domain IDs.
    However, if it is just a single IVR switch, I would avoid NAT. To do this, all your domain IDs should be statically defined, and there must be no overlapping domain IDs between IVR'd VSANs. If it is a brand new install, you can easily achieve this by specifying unique allowed domain ID ranges per VSAN.
    For example, each customer can have their own VSAN with, say, 10 domain IDs, and the storage can be in VSAN 2. You will only use one domain ID per VSAN on day 1. Allowing 10 domain IDs per VSAN means you can add up to 9 other switches per VSAN should you need to in the future. There is a maximum of 239 domains per VSAN, so you could have up to 23 customers on your 9513 working with a range of 10 domain IDs per VSAN.
    fcdomain domain 2 static vsan 2
    fcdomain domain 10 static vsan 10
    fcdomain domain 20 static vsan 20
    fcdomain domain 30 static vsan 30
    ..and so on..
    fcdomain domain 230 static vsan 230
    fcdomain allowed 1-9 vsan 2
    fcdomain allowed 10-19 vsan 10
    fcdomain allowed 20-29 vsan 20
    fcdomain allowed 30-39 vsan 30
    ..and so on..
    fcdomain allowed 230-239 vsan 230
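    Pulling that together, a minimal single-switch IVR setup without NAT might look roughly like this (zone/zoneset names and pWWNs are placeholders; check the syntax against your SAN-OS release):
    ivr enable
    ivr distribute
    ivr vsan-topology auto
    ivr zone name ivz_cust10_storage
      member pwwn 21:00:00:e0:8b:00:00:01 vsan 10
      member pwwn 50:06:0e:80:00:00:00:01 vsan 2
    ivr zoneset name ivzs_prod
      member ivz_cust10_storage
    ivr zoneset activate name ivzs_prod
    ivr commit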
    With or without IVR you should still run dual fabrics (e.g. 95xx in each fabric) and host based multipathing for redundancy.
    And don't forget IVR will require an Enterprise license. I have even seen a large outage because the customer forgot to install the license before the 120 day grace period expired.

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. Anyway, what I often struggle with is the Logical Levels (in the Content tab) where the level of each dimension is to be set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates, i.e. getting them to work/appear with different dimensions. (Using the menu "More" - "Get levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI server.
    For instance, I have about 10-12 different dimensions: should all of them always be connected to each fact table, either on Detail or Total level? I can see the use of the logical levels when using aggregate fact tables (on quarter, month, etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    "For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level." It is not necessary to connect every dimension; it depends on the report you are creating. But as a best practice, maintain them all at the Detail level when you specify join conditions in the physical layer.
    For example, for the sales table: if you want to report at the ProductDimension.ProductName level, use the Detail level; otherwise use the Total level (at the Product or Employee level).
    Get Levels (available only for fact tables) changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension.
    Source: Admin Guide (Get Levels definition)
    thanks,
    Saichand.v

  • Best practices for setting up users on a small office network?

    Hello,
    I am setting up a small office and am wondering what the best practices/steps are to set up and manage the admin, user logins and sharing privileges for the setup below:
    Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
    Video Editing Suite: Want to connect a new iMac and a Mac Pro, on an open login (multiple users)
    All machines are to be able to connect to the network, peripherals and external hard drive. Also, I would like to setup drop boxes as well to easily share files between the computers (I was thinking of using the external harddrive for this).
    Thank you,

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more and detail information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang

  • Add fields in transformations in BI 7 (best practice)?

    Hi Experts,
    I have a question regarding transformation of data in BI 7.0.
    Task:
    Add new fields in a second level DSO, based on some manipulation of first level DSO data. In 3.5 we would have used a start routine to manipulate and append the new fields to the structure.
    Possible solutions:
    1) Add the new fields to first level DSO as well (empty)
    - Pro: Simple, easy to understand
    - Con: Disc space consuming, performance degrading when writing to first level DSO
    2) Use routines in the field mapping
    - Pro: Simple
    - Con: Hard to performance optimize (we could of course fill an internal table in the start routine and then read from this to get some performance optimization, but the solution would be more complex).
    3) Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine).
    Does anybody know what is best practice is? Or do you have any experience regarding what you see as the best solution?
    Thank you in advance,
    Mikael

    Hi Mikael,
    I like the 3rd option and have used it many, many times. In answer to your question:
    Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized. - Yes, I have read about and tested this, and it works faster. An OSS consulting note is out there indicating the speed of the end routine.
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine). - Yes, but by using the result package, the manipulation can be done easily.
    Hope it helps.
    Thanks,
    Pom

  • Temp Tables - Best Practice

    Hello,
    I have a customer who uses temp tables all over their application.
    This customer is a novice and the app has its roots in VB6. We are converting it to .net
    I would really like to know the best practice for using temp tables.
    I have seen code like this in the app.
    CR2.Database.Tables.Item(1).Location = "tempdb.dbo.[##Scott_xwPaySheetDtlForN]"
    That seems to work, though I do not know why the full tempdb.dbo.[## prefix is required.
    However, when I use this in the new report I am doing, I get runtime errors.
    I also tried this:
    CR2.Database.Tables.Item(1).Location = "##Scott_xwPaySheetDtlForN"
    I did not get errors, but I was returned data I did not expect.
    Before I delve into different ways to do this, I could use some help with a good pattern to use.
    thanks

    Hi Scott,
    Are you using the RDC still? It's not clear but looks like it.
    We had an API that could piggyback the HDBC handle in the RDC (craxdrt.dll), but that API is no longer available in .NET. Also, the RDC is not supported in .NET, since .NET uses the framework and the RDC is COM.
    The workaround is to copy the temp data into a data set and then set the location to the data set. There is no way that I know of to get to the tempdb from .NET. The reason is that there is no CR API to set the owner of the table to the user; MS SQL Server locks the tempdb so that user has exclusive rights on it.
    Thank you
    Don

  • Best Practice for Significant Amounts of Data

    This is basically a best-practice/concept question and it spans both Xcelsius & Excel functions:
    I am working on a dashboard for the US Military to report on some basic financial transactions that happen on bases around the globe.  These transactions fall into four categories, so my aggregation is as follows:
    Year,Month,Country,Base,Category (data is Transaction Count and Total Amount)
    This is a rather high level of aggregation, and it takes about 20 million transactions and aggregates them into about 6000 rows of data for a two year period.
    I would like to allow the users to select a Category and a country and see a chart which summarizes transactions for that country ( X-axis for Month, Y-axis Transaction Count or Amount ).  I would like each series on this chart to represent a Base.
    My problem is that 6000 rows still appears to be too many rows for an Xcelsius dashboard to handle.  I have followed the Concatenated Key approach and used SUMIF to populate a matrix with the data for use in the Chart.  This matrix would have Bases for row headings (only those within the selected country) and the Column Headings would be Month.  The data would be COUNT. (I also need the same matrix with Dollar Amounts as the data). 
    In Excel this matrix works fine and seems to be very fast.  The problem is with Xcelsius.  I have imported the spreadsheet, but have NOT even created the chart yet and Xcelsius is CHOKING (and crashing).  I changed Max Rows to 7000 to accommodate the data.  I placed a simple combo box and a grid on the canvas - BUT NO CHART yet - and the dashboard takes forever to generate and is REALLY slow to react to a simple change in the combo box.
    So, I guess this brings up a few questions:
    1)     Am I doing something wrong and did I miss something that would prevent this problem?
    2)     If this is standard Xcelsius behavior, what are the Best Practices to solve the problem?
    a.     Do I have to create 50 different Data Ranges in order to improve performance (i.e. Each Country-Category would have a separate range)?
    b.     Would it even work if it had that many data ranges in it?
    c.     Do you aggregate it as a crosstab (months as column headings) and insert that crosstabbed data into Excel?
    d.     Other ideas that I'm missing?
    FYI:  These dashboards will be exported to PDF and distributed.  They will not be connected to a server or data source.
    Any thoughts or guidance would be appreciated.
    Thanks,
    David

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/ gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to certain extent.
    Regards
    Nikhil

  • Best-practice for Catalog Views ? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000) and 300 end users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is how I can implement this.
    My first idea is:
    1. Create 110 Procurement Catalogs (1 for every prod.category).   Each catalog should contain only its product category.
    2. Assign in my Org Model, in a user-level all the "catalogs" that the user should access.
    Do you have any idea in order to improve this ?
    Greetings from Mexico,
    Diego

    Hi,
    Your way of doing it will work, but you'll get maintenance issues (too many catalogs, and a catalog link to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you'll have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
    My advice:
    -Challenge your customer about views, and try to limit the number of views, with for example strategic and non-strategic
    -With CCM 1.0, stick to the procurement catalogs, or implement BAdIs to assign items to the views (I have experienced it; it works, but is quite difficult), but with a limited number of views
    Good luck.
    Vadim

  • Best practice on sqlite for games?

    Hi Everyone, I'm new to building games/apps, so I apologize if this question is redundant...
    I am developing a couple games for Android/iOS, and was initially using a regular (un-encrypted) sqlite database. I need to populate the database with a lot of info for the games, such as levels, store items, etc. Originally, I was creating the database with SQL Manager (Firefox) and then when I install a game on a device, it would copy that pre-populated database to the device. However, if someone was able to access that app's database, they could feasibly add unlimited coins to their account, unlock every level, etc.
    So I have a few questions:
    First, can someone access that data in an APK/IPA app once downloaded from the app store, or is the method I've been using above secure and good practice?
    Second, is the best solution to go with an encrypted database? I know Adobe Air has the built-in support for that, and I have the perfect article on how to create it (Ten tips for building better Adobe AIR applications | Adobe Developer Connection) but I would like the expert community opinion on this.
    Now, if the answer is to go with encrypted, that's great - but, in doing so, is it possible to still use the copy function at the beginning or do I need to include all of the script to create the database tables and then populate them with everything? That will be quite a bit of script to handle the initial setup, and if the user was to abandon the app halfway through that population, it might mess things up.
    Any thoughts / best practice / recommendations are very appreciated. Thank you!

    I'll just post my own reply to this.
    What I ended up doing was creating a script that self-creates the database and then populates the tables (unencrypted... the encryption portion is commented out until store publishing). It's a tremendous amount of code, completely repetitive with the exception of the values I'm entering, but you can't do an insert loop or a multi-line insert statement in AIR's SQLite, so the best move is to create everything line by line.
    This creates the database, and since it's not encrypted, it can be tested using Firefox's SQLite Manager or some other database program. Once you're ready for deployment to the app stores, you simply modify the above script to use encryption instead of the unencrypted method used for testing.
    So far this has worked best for me. If anyone needs some example code, let me know and I can post it.

  • Best Practice Table Creation for Multiple Customers, Weekly/Monthly Sales Data in Multiple Fields

    We have an homegrown Access database originally designed in 2000 that now has an SQL back-end.  The database has not yet been converted to a higher format such as Access 2007 since at least 2 users are still on Access 2003.  It is fine if suggestions
    will only work with Access 2007 or higher.
    I'm trying to determine if our database is the best place to do this or if we should look at another solution.  We have thousands of products each with a single identifier.  There are customers who provide us regular sales reporting for what was
    sold in a given time period -- weekly, monthly, quarterly, yearly time periods being most important.  This reporting may or may not include all of our product identifiers.  The reporting is typically based on calendar-defined timing although we have
    some customers who have their own calendars which may not align to a calendar month or calendar year so recording the time period can be helpful.
    Each customer's sales report can contain anything from 1,000-20,000 rows of products for each report.  Each customer report is different and they typically have between 4-30 columns of data for each product; headers are consistently named.  The
    product identifiers included may vary by customer and even within each report for a customer; the data in the product identifier row changes each week.  Headers include a wide variety of data such as overall on hand, overall on order, unsellable on hand,
    returns, on hand information for each location or customer grouping, sell-through units information for each location or customer grouping for that given time period, sell-through dollars information for each location or customer grouping for that given time
    period,  sell-through units information for each location or customer grouping for a cumulative time period (same thing for dollars), warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's
    current status code for that product, and so on.
    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period).  Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time.  Is it possible to
    set-up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, by specific products or
    categories across specific weeks/months/years, etc.  We do have a separate product table so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries.  We do need to maintain
    the sales reporting information indefinitely.
    I welcome any suggestions, best practice or resources (books, web, etc).
    Many thanks!

    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period).  Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time.  Is it possible to
    set-up tables .....
    I assume you want to migrate to SQL Server.
    Your best course of action is to hire a professional database designer for a short period like a month.
    Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
    Finally you have to hire an SSRS professional to design reports for your company.
    It is also beneficial if the above professionals train your staff while building the new RDBMS.
    Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
