Facial Recognition-- Best Practice

Question about facial recognition for tagging. Let's say you have a person who has baby pictures, child pictures, teenager pictures, and adult pictures.
Should you tell PSE that these are different people (John infant, John child, John teenager, John adult) to make the searching more accurate? (You can then combine them manually.)
Or should you tell PSE that these are the same person, John, and let it figure out the changes without any confusion?

Good questions.  Anecdotally, PSE automatically recognized a photo of me when I was 1, after I tagged a number of photos of me as an adult.  I don't know how representative that is, but I would bet that it's not going to recognize me as a teenager!
As for technical references, I'm not aware of any that Adobe has published.  It appears that Adobe has licensed the technology from Cognitec:
http://www.cognitec-systems.de/News-Details.61+M58e94ecfeec.0.html
If you're a programmer, you can read the Cognitec SDK manual:
http://www.cognitec-systems.de/fileadmin/cognitec/media/products/FaceVACS-SDK/FaceVACS-SDK_Manual.pdf
An interesting nugget:
Facial pose requirements 
While a frontal view of the upright face is recommended, some deviations are admissible. A horizontal (yaw) or nose up/down (pitch) rotation of the head of up to 15 degrees is admissible. Also, the face is admitted to deviate from upright view (roll) up to 15 degrees.
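
Out of curiosity, those limits are easy to express as a quick sanity check. A tiny sketch in Python (the function is hypothetical, not part of any Adobe or Cognitec API; only the 15-degree thresholds come from the quoted manual):

# Hypothetical admissibility gate based on the pose limits quoted above:
# yaw, pitch and roll must each be within 15 degrees of frontal.
def pose_is_admissible(yaw_deg: float, pitch_deg: float, roll_deg: float,
                       limit_deg: float = 15.0) -> bool:
    return all(abs(a) <= limit_deg for a in (yaw_deg, pitch_deg, roll_deg))

# A face turned 10 degrees left and pitched up 5 degrees passes:
assert pose_is_admissible(yaw_deg=-10.0, pitch_deg=5.0, roll_deg=0.0)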

Similar Messages

  • LRCC Face recognition - best practices?

    OK, so we are all new to the wonderful world of face recognition in LR.  I'm trying to work out what the best practices are for using this.
    A little bit of background - I have a catalog of over 200,000 images.  In addition to portrait and wedding clients, a significant part of my work is with models, and another significant part is theatre photography.  I have been wanting some sort of face recognition to help with both for some time.
    What are your naming conventions for people? Here are mine:
    Ideally I would label people as "surname, firstname" so that I can keep members of a family together in the "named people" display, but commas are not allowed in names.  Also, the professional names of many models don't fit that pattern, e.g. "Strawberry Venom" and "Cute as Sin" are two models I have worked with.
    I am trying to come up with a sensible naming convention; at the moment it is "Surname/ Firstname" for clients, theatre folk and friends/family.  Models are still a problem; at present I am thinking of "Surname/ Firstname (model name(s))".  While I may not be able to remember the real names of models, I do usually know them from model releases.  This naming will still permit me to filter/find them in the Keyword List panel by just entering the model name.
    One final addition I am making to this naming convention is the use of a hashtag suffix to the name: #F for friends and family, #C for clients, #T for theatre/actors and #M for models.  This enables me to filter on just models, or just actors, or just friends and family.  Where people fall into multiple categories I add multiple hashtags, so photos of me would be keyworded with "Butterfield/ Ian #F #T".
    Unknown / unidentified people.
    What I am not yet certain about is how to handle unknown / unidentified people.  They fall into a number of different categories.
    People I don't know and am never likely to know (e.g. random strangers on the street, local tour guides on holiday, random people in the background, etc.)
    This group is relatively easy to deal with - simply delete the face recognition.  End of story.
    People whose names I don't know yet but am likely to find out (e.g. actors in a production for which I don't have a programme)
    For these people I am making up a unique name using the format "date/ Context-Gendernn", e.g. an unknown male actor at Stockport Garrick Theatre would be named "20150313/ SGT-M01".  Although this may appear a complex solution, it has a number of advantages.  If/when I do learn the name of the individual (e.g. I photograph them in a different production), it is simply a case of renaming the people keyword.  Creating a unique name, rather than simply assigning all unknowns to a bucket name, helps the face recognition algorithms find this person without being confused by different faces assigned to the same name.  I am also using the hashtag #U to make it easier to filter the unknown faces when I need to.  (A small sketch of this scheme appears after this list.)
    People whose names I don't know and there is only a slim possibility of meeting/photographing again (e.g. guests at a client wedding)
    It feels as though I ought to just delete the face recognition and have done with it, and this is what I would do except for one thing: other than manually drawing face regions, I have not yet found a way to get Lightroom to rescan a folder for faces if you have previously deleted the face recognition.  This means that deleting face regions from a large number of people is not easily reversed.  I might just leave these people in the "Unnamed People" category... at least until such time as there is a way to rescan a folder or collection.
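    To make the convention concrete, a small illustrative sketch in Python (the helpers are made up; nothing here is Lightroom-specific):

    # Illustrative helpers for the naming convention described above.
    def unknown_person_keyword(date: str, context: str, gender: str, seq: int) -> str:
        # Build a unique placeholder name such as "20150313/ SGT-M01 #U".
        return f"{date}/ {context}-{gender}{seq:02d} #U"

    def filter_by_hashtag(keywords: list[str], tag: str) -> list[str]:
        # Keep only the people keywords carrying the given hashtag suffix.
        return [kw for kw in keywords if tag in kw.split()]

    people = [
        "Butterfield/ Ian #F #T",
        unknown_person_keyword("20150313", "SGT", "M", 1),
    ]
    print(filter_by_hashtag(people, "#U"))  # ['20150313/ SGT-M01 #U']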
    Summary
    My practices are still evolving, but I hope these thoughts and ideas will help others think through the issues and come up with solutions that work for their situation.  I am interested in hearing how other people are using the face recognition system, especially if anyone is aware of any 'best practices' that Adobe or anyone else has recommended.

    Glad it helped.
    Yes and no.  You can still put the people keywords into hierarchies within the keyword list - you can arrange them just like any other keywords.  So you just create a "Smith family" keyword and store "John Smith" under it.  What you can't do is apply BOTH "Smith family" and "John Smith" to the same face.
    My use of the hashtags came about because I initially had a top-level keyword for models, one for clients, one for theatre people and one for family and friends.  Then I discovered that some of the theatre folk were also clients (headshots) - and what to do when a friend is also a client?  So the hashtag system means a person can be a friend, a model and an actor as well as being a client!  (#T #C #M #F)

  • Best practice on how to handle employees who do not have a last name?

    We are a Canadian-based company with some international employees. We have recently begun to enter the international employees into the HR module. This has led to some problems for employees from India who do not have both a first name and a last name, as many of our downstream systems require both.
    I'm wondering what other companies with international employees have done in this circumstance. Can someone recommend a best practice?  We want to ensure that whatever we do is not offensive to anyone.
    Thanks.

    Dear,
    Indian names vary from region to region, and they are also influenced by religion and caste. Different languages are spoken in different regions of India, and this variety leads to confusing differences in names and their styles.
    Now to the point: since you are an international company, when entering the names of your international employees I would suggest using the names as they appear in their passports (if they hold valid passports). If no passport is available, use their bank information or other official records, so that they don't face further problems with visas, banking transactions, etc.
    1. Maddepalli Venkata Ramana Rao
    In this case Maddepalli is his surname, Venkata Ramana can be his first name, and Rao can be entered as his second/last name.
    2. Hardev Singh
    In this case you won't find a surname; Singh serves as a surname or ethnic identifier. Here you can enter Hardev as the first name and Singh as the last name.
    Make some entry fields optional, depending on the situation, and take the help of an employee of Indian origin if one exists in your office.
    Regards,
    Syed Hussain.
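
    A rough sketch of that fallback logic in Python, assuming a simple whitespace split (as Syed notes, real names need case-by-case human review; the function is purely illustrative):

    # Illustrative only: fill first/last fields when no separate surname
    # is recorded. The last token becomes the last name; a single token
    # is duplicated rather than leaving a mandatory field empty.
    def split_full_name(full_name: str) -> tuple[str, str]:
        parts = full_name.split()
        if len(parts) == 1:
            return parts[0], parts[0]  # flag this case for human review
        return " ".join(parts[:-1]), parts[-1]

    print(split_full_name("Maddepalli Venkata Ramana Rao"))  # ('Maddepalli Venkata Ramana', 'Rao')
    print(split_full_name("Hardev Singh"))                   # ('Hardev', 'Singh')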

  • Best practices: migrating from Aperture 2 to Aperture 3

    What are the best practices for moving from Aperture 2 to Aperture 3? One thing I do know from reading the discussion boards is to turn off Faces recognition until everything is working. What else?

    Make sure you do a full backup and rebuild of the Aperture 2 library before you migrate (hold Opt-Cmd when you open Aperture 2).
    Aperture 3 may slow down on you; don't be scared, it will finish. Mine took about 1 1/2 days for roughly 30,000 raw photos / 800 GB.
    Don't reprocess masters until Aperture is done with everything it needs to do; keep an eye on the Activity window.
    After you reprocess masters it will take a long time to generate thumbnails; again, don't worry.
    Bill Debevc
    sshaphotos.com

  • iPhoto 8.1.1 Fixes Facial Recognition Problems

    That's what I have observed from my testing on a library of over 15,000 pictures.
    It is important to "Detect Missing Faces" as explained in the appended support article:
    http://support.apple.com/kb/HT3941
    Your computer will work long and hard at this task ... best to leave it alone while you are having Thanksgiving dinner! When done, quit iPhoto and expect it to save a very large file describing all the faces it recognized. In particular, the 'faces' files in the library will be much larger, presumably because they now contain much more information.
    The recommended procedure to identify a few faces manually and then let iPhoto find the rest automatically is unchanged, but what a difference in performance! Expect iPhoto to immediately select many pictures that "look like" the faces you found manually. Also, expect to have to confirm or reject many of the selections ... it seems that iPhoto thinks "everyone looks like everyone else" at first, but it gets better as you improve its training. Also, expect the facial recognition process to occur while you are training by accepting or rejecting selections ... don't be surprised if certain selections are suddenly 'withdrawn' from consideration as you accept or reject others.
    You should also expect to see the "Is This ...?" pop-up much more frequently. I observed that as soon as I named one picture of a person in multiple photographs they were immediately identified in successive photographs, especially if the photographs were taken at the same event. Of course, the real test will be if iPhoto's facial recognition can recognize a person as they change over time, especially when they are children.
    There are still a few problems. If you have a large number of photographs of the same person, iPhoto will select them all ... and more. There needs to be an easier way to confirm or reject the selections rather than laboriously clicking on each. Expect family members to be erroneously suggested as matches, especially children with their parents and grandparents ... what else would you expect? Glasses are still a problem, especially sunglasses, but it is surprising how well iPhoto can "see through" such an obvious problem. Finally, a full face, even in profile, is necessary for reliable identification ... putting a hand over the mouth or wearing a hat over the eyes defeats the identification process.
    iPhoton

    Thank you, Henk ... that's also how it works for me. It seems that the 8.1.1 update shifted more of the workload to the initial search for faces, i.e., "Detect Missing Faces". However, it still takes a little time to match each face, particularly if there are many faces. The big difference is that a little time is a lot better than a seemingly infinite amount of time, which is how it used to work ... at least it seemed that way.
    The library on which I have done most of my testing is mostly "people pictures" ... school classes, family gatherings, birthday parties and sporting events. One of the sporting events is a professional baseball game photographed from behind the first base dugout at the level of the playing field. Pictures of the batters at home plate show hundreds of eager fans behind the third base dugout intently watching from the other side of the field. As the pictures were taken with a telephoto lens, all those faces are not only in the field of view, but also clearly in focus ... iPhoto finds them all! Not that I know them, much less are interested in naming them, but those hundreds of faces take time to be found and there are many pictures like that.
    Thanks also for the tip to use the Alt/Option key. I can use that for exactly this situation ... ignoring the faces of people I am not interested in recognizing, much less naming.
    iPhoton

  • Best practices: exposing an AM (OAF 11.5.10) as a web service to external systems

    I have a customer who is developing extensions to their E-Business install base using OAF 11.5.10, and they have approached me with questions on how they could expose some of the business services they have developed (AMs/VOs mainly) as web services to be used in a BPEL/web-service framework. The BPEL service is SeeBeyond (not sure how it is spelled), not Oracle's.
    I have outlined two ways, but since I have not developed anything on OAF I have no idea whether either is possible.
    First: migrate the ADF BC (or BC4J) projects from OAF with JDeveloper 10.1.3 and more or less create a simple facade layer of a session bean, then right-click and deploy as a web service.
    Second: use a web service library such as Axis directly in JServ to expose them on the "target" server.
    Does anyone have any best practices on this topic?

    For recognition and stable functionality of USB devices, your B&W G3 should be running OS 8.6 minimally. The downloadable OS 8.6 Update can be run on systems running OS 8.5/8.5.1. If (after updating to 8.6), your flash drive still isn't recognized, I'd recommend downloading the OS 9.1 Update for the purpose of extracting its newer USB support drivers, using the downloadable utility "TomeViewer." These OS 9.1 USB support files can be extracted directly to your OS 8.6 Extensions folder and are fully compatible with the slightly older OS software. It worked for me, when OS 8.6's USB support files lacked a broad enough database to support my first USB flash drive.

  • What is the best-practice workflow from Illustrator to Flash in CS6?

    Hello- question from an illustrator to flash workflow n00b.
    I'm creating a flash experience with animated characters.
    I'm building my art assets in illustrator.
    I've found it relatively easy & straightforward to import assets from illustrator into flash.
    I can conveniently import my layers and symbols from illustrator as separate layers and symbols in flash and manipulate them.
    However, let's say after import I want to change something in an asset using Illustrator. For example, I have a character with facial features, including eyes. I want to change the eyes in Illustrator and have those changes propagated in some way - automagically or manually - in Flash.
    What is the right way to do this? Are there best practices for being able to go back and forth between flash and illustrator?
    Thank you!
    ~ sae

    Thanks for the reply Suhas.
    I guess the answer to my question is that there is no good way. In the photoshop/illustrator workflow, I can paste a document from illustrator into photoshop and then double-click it in photoshop to open it back up in illustrator and edit the original. I have seen posts on people doing this between Flash and Illustrator with the now-discontinued Flash Catalyst- a process that was apparently called round-tripping. Ideally, this is the functionality I'd like to have between Flash and Illustrator but I take it from your reply that the answer is 'Does not exist.'
    ~ sae

  • Lightroom 5 is out.... IS THERE FACIAL RECOGNITION?

    So it seems Lightroom 5 is released.
    http://www.adobe.com/products/photoshop-lightroom.edu.html
    Now to the question.. IS FACIAL RECOGNITION FINALLY IN LIGHTROOM?

    Keith_Reeder wrote:
    massive price difference between Lr and Cap One Pro
    C1+MP is way more expensive than Lr, but Lr+Ps is way more expensive than C1...
    But we're only talking about a few hundred dollars here, and for people paying thousands for bodies and lenses, a few hundred dollars difference in software shouldn't be a deal maker/breaker.
    So it really boils down to which is best, in my opinion.
    For me, I will be having Ps as part of CC, and I'm inextricably tied to Lr plugins, so wild horses couldn't drag me over to the C1 camp. However, if I didn't have such predisposing factors, I'd seriously consider C1+MP over Lr+Ps. On the other hand, if the choice is Lr+Ps vs. C1+MP+Ps, then the balance tips again - glad I don't have to make such a decision.
    Don't get me wrong: all things considered, I'd still choose Lr/ACR over C1-based raw processing, since it's the best (IMO), and it runs well for me, but if I couldn't get it to run well, I'd jump ship in a heartbeat. And it does seem like Phase One is investing heavily in software improvements - more so than Adobe (w.r.t. Lightroom) - if this keeps up, it's only a matter of time before C1 surpasses Lr.
    Cheers,
    Rob

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. Anyway, what I often struggle with is the Logical Levels (in the Content tab), where the level of each dimension is set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex, I sometimes struggle with the aggregates - getting them to work/appear with different dimensions. (Using the menu "More" - "Get Levels" does not always give the best solution... far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI server.
    For instance - I have about 10-12 different dimensions; should all of them always be connected to each fact table, either at Detail or Total level? I can see the use of the logical levels when using aggregate fact tables (on quarter, month, etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but I haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    "For instance - I have about 10-12 different dimensions; should all of them always be connected to each fact table, either at Detail or Total level?" It is not necessary to connect all dimensions; it depends on the report you are creating. But as a best practice, whenever you specify join conditions in the physical layer, you should maintain them all at the Detail level.
    For example, for the sales table: if you want to report at the ProductDimension.ProductName level, you should use the Detail level; otherwise use the Total level (at the Product or Employee level).
    Get Levels (available only for fact tables) changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension.
    Source: Admin Guide (Get Levels definition)
    thanks,
    Saichand.v

  • Best practices for setting up users on a small office network?

    Hello,
    I am setting up a small office and am wondering what the best practices/steps are to set up and manage the admin and user logins and sharing privileges for the setup below:
    Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
    Video Editing Suite: Want to connect a new iMac and a Mac Pro, on an open login (multiple users)
    All machines should be able to connect to the network, peripherals and an external hard drive. I would also like to set up drop boxes to easily share files between the computers (I was thinking of using the external hard drive for this).
    Thank you,

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more and detail information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang

  • Add fields in transformations in BI 7 (best practice)?

    Hi Experts,
    I have a question regarding transformation of data in BI 7.0.
    Task:
    Add new fields in a second level DSO, based on some manipulation of first level DSO data. In 3.5 we would have used a start routine to manipulate and append the new fields to the structure.
    Possible solutions:
    1) Add the new fields to first level DSO as well (empty)
    - Pro: Simple, easy to understand
    - Con: Disc space consuming, performance degrading when writing to first level DSO
    2) Use routines in the field mapping
    - Pro: Simple
    - Con: Hard to performance optimize (we could of course fill an internal table in the start routine and then read from this to get some performance optimization, but the solution would be more complex).
    3) Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine).
    Does anybody know what the best practice is? Or do you have any experience regarding what you see as the best solution?
    Thank you in advance,
    Mikael

    Hi Mikael.
    I like the 3rd option and have used it many, many times.  In answer to your question:
    Update the fields in the end routine
    - Pro: Simple, easy to understand, can be performance optimized - Yes, I have read and tested that this works faster.  There is an OSS consulting note out there indicating the speed of the end routine.
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine). - Yes, but by using the result package, the manipulation can be done easily.
    Hope it helps.
    Thanks,
    Pom

  • Temp Tables - Best Practice

    Hello,
    I have a customer who uses temp tables all over their application.
    This customer is a novice and the app has its roots in VB6. We are converting it to .net
    I would really like to know the best practice for using temp tables.
    I have seen code like this in the app.
    CR2.Database.Tables.Item(1).Location = "tempdb.dbo.[##Scott_xwPaySheetDtlForN]"
    That seems to work, though I do not know why the full tempdb.dbo.[## prefix is required.
    However, when I use this in the new report I am doing, I get runtime errors.
    I also tried this:
    CR2.Database.Tables.Item(1).Location = "##Scott_xwPaySheetDtlForN"
    I did not get errors, but I was returned data I did not expect.
    Before I delve into different ways to do this, I could use some help with a good pattern to use.
    Thanks
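
    On the "why the full tempdb.dbo.[##" question: a global temp table (double-# prefix) is physically created in tempdb under the dbo schema, so the fully qualified name resolves unambiguously from any session. A minimal sketch outside Crystal, assuming pyodbc and a reachable SQL Server (the connection string is a placeholder, not from this thread):

    # Global temp tables (##) live in tempdb under dbo, which is why
    # tempdb.dbo.[##name] is the unambiguous way to address one.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
        "Trusted_Connection=yes;", autocommit=True)
    cur = conn.cursor()
    cur.execute("CREATE TABLE ##Scott_xwPaySheetDtlForN (id INT, amount MONEY)")
    cur.execute("INSERT INTO ##Scott_xwPaySheetDtlForN VALUES (1, 100.00)")

    # Both forms hit the same table; the qualified one leaves no doubt.
    print(cur.execute(
        "SELECT * FROM tempdb.dbo.[##Scott_xwPaySheetDtlForN]").fetchall())
    conn.close()  # dropped once no session references the table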

    Hi Scott,
    Are you using the RDC still? It's not clear, but it looks like it.
    We had an API that could piggyback the HDBC handle in the RDC (craxdrt.dll), but that API is no longer available in .NET. Also, the RDC is not supported in .NET, since .NET uses the framework and the RDC is COM.
    The workaround is to copy the temp data into a dataset and then set the location to the dataset. There is no way that I know of to get to tempdb from .NET. The reason is that there is no CR API to set the owner of the table to the user; MS SQL Server locks tempdb so that the user has exclusive rights on it.
    Thank you
    Don

  • Best Practice for Significant Amounts of Data

    This is basically a best-practice/concept question and it spans both Xcelsius & Excel functions:
    I am working on a dashboard for the US Military to report on some basic financial transactions that happen on bases around the globe.  These transactions fall into four categories, so my aggregation is as follows:
    Year,Month,Country,Base,Category (data is Transaction Count and Total Amount)
    This is a rather high level of aggregation, and it takes about 20 million transactions and aggregates them into about 6000 rows of data for a two year period.
    I would like to allow the users to select a Category and a country and see a chart which summarizes transactions for that country ( X-axis for Month, Y-axis Transaction Count or Amount ).  I would like each series on this chart to represent a Base.
    My problem is that 6,000 rows still appears to be too many for an Xcelsius dashboard to handle.  I have followed the concatenated-key approach and used SUMIF to populate a matrix with the data for use in the chart.  This matrix has Bases for row headings (only those within the selected country) and Months for column headings; the data is the COUNT. (I also need the same matrix with dollar amounts as the data.)
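    For anyone unfamiliar with the concatenated-key trick, a rough sketch of the same aggregation in Python (the field names and sample values are invented for illustration, not taken from the real data):

    from collections import defaultdict

    rows = [
        # (country, base, category, month, transaction_count) - invented sample
        ("Germany", "Ramstein",    "Payroll", "2010-01", 412),
        ("Germany", "Ramstein",    "Payroll", "2010-02", 398),
        ("Germany", "Spangdahlem", "Payroll", "2010-01", 120),
    ]

    # Excel helper-column equivalent: key = country|category|base|month,
    # then SUMIF over that key fills one matrix cell.
    matrix = defaultdict(int)
    for country, base, category, month, count in rows:
        matrix[(country, category, base, month)] += count

    # One chart series per base for the selected country and category.
    selected = ("Germany", "Payroll")
    for (country, category, base, month), count in sorted(matrix.items()):
        if (country, category) == selected:
            print(base, month, count)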
    In Excel this matrix works fine and seems to be very fast.  The problem is with Xcelsius.  I have imported the spreadsheet but have NOT even created the chart yet, and Xcelsius is CHOKING (and crashing).  I changed Max Rows to 7000 to accommodate the data.  I placed a simple combo box and a grid on the canvas - BUT NO CHART yet - and the dashboard takes forever to generate and is REALLY slow to react to a simple change in the combo box.
    So, I guess this brings up a few questions:
    1)     Am I doing something wrong and did I miss something that would prevent this problem?
    2)     If this is standard Xcelsius behavior, what are the Best Practices to solve the problem?
    a.     Do I have to create 50 different Data Ranges in order to improve performance (i.e. Each Country-Category would have a separate range)?
    b.     Would it even work if it had that many data ranges in it?
    c.     Do you aggregate it as a crosstab (months as column headings) and insert that crosstabbed data into Excel?
    d.     Other ideas that I'm missing?
    FYI:  These dashboards will be exported to PDF and distributed.  They will not be connected to a server or data source.
    Any thoughts or guidance would be appreciated.
    Thanks,
    David

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/ gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to certain extent.
    Regards
    Nikhil

  • Best practice for Catalog Views? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000) and 300 end-users.  I would like to know the best practice for segmenting the catalog.  I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc.  The problem is: how can I implement this?
    My first idea is:
    1. Create 110 Procurement Catalogs (1 for every prod.category).   Each catalog should contain only its product category.
    2. In my Org Model, assign at user level all the "catalogs" that the user should access.
    Do you have any idea in order to improve this ?
    Saludos desde Mexico,
    Diego

    Hi,
    Your way of doing it will work, but you'll get maintenance issues (too many catalogs, and catalog links to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you'll have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
    My advice:
    - Challenge your customer about views, and try to limit their number, for example to strategic and non-strategic
    - With CCM 1.0, stick to the procurement catalogs, or implement BADIs to assign items to the views (I have experienced it; it works, but it is quite difficult), but with a limited number of views
    Good luck.
    Vadim
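
    Stripped of anything SRM-specific, the views idea boils down to a user-to-categories mapping that filters one shared catalog. A toy sketch in Python (all names and values invented):

    # One shared catalog; each user sees only the categories in their view.
    catalog = {
        "item-001": 10, "item-002": 20, "item-003": 30, "item-004": 80,
    }  # item id -> product category

    user_views = {
        "alice": {10, 20, 30},
        "bob": {80},
    }

    def visible_items(user: str) -> list[str]:
        # Items whose category falls inside the user's assigned view.
        allowed = user_views.get(user, set())
        return [item for item, cat in catalog.items() if cat in allowed]

    print(visible_items("alice"))  # ['item-001', 'item-002', 'item-003']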

  • Best practice on sqlite for games?

    Hi Everyone, I'm new to building games/apps, so I apologize if this question is redundant...
    I am developing a couple of games for Android/iOS, and was initially using a regular (unencrypted) SQLite database. I need to populate the database with a lot of info for the games, such as levels, store items, etc. Originally, I was creating the database with SQLite Manager (Firefox) and then, when I installed a game on a device, it would copy that pre-populated database to the device. However, if someone was able to access that app's database, they could feasibly add unlimited coins to their account, unlock every level, etc.
    So I have a few questions:
    First, can someone access that data in an APK/IPA app once downloaded from the app store, or is the method I've been using above secure and good practice?
    Second, is the best solution to go with an encrypted database? I know Adobe Air has the built-in support for that, and I have the perfect article on how to create it (Ten tips for building better Adobe AIR applications | Adobe Developer Connection) but I would like the expert community opinion on this.
    Now, if the answer is to go with encrypted, that's great - but, in doing so, is it possible to still use the copy function at the beginning or do I need to include all of the script to create the database tables and then populate them with everything? That will be quite a bit of script to handle the initial setup, and if the user was to abandon the app halfway through that population, it might mess things up.
    Any thoughts / best practice / recommendations are very appreciated. Thank you!

    I'll just post my own reply to this.
    What I ended up doing was creating a script that self-creates the database and then populates the tables (unencrypted... the encryption portion is commented out until store publishing). It's a tremendous amount of code, completely repetitive except for the values I'm entering, but you can't do an insert loop or multi-line insert statement in AIR's SQLite, so the best move is to create everything line by line.
    This creates the database, and since it's not encrypted, it can be tested using Firefox's SQLite Manager or some other database program. Once you're ready for deployment to the app stores, you simply modify the above setup to use encryption instead of the unencrypted method used for testing.
    So far this has worked best for me. If anyone needs some example code, let me know and I can post it.
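    For comparison, outside AIR the same create-and-populate step collapses to very little code. A sketch with Python's sqlite3, where a parameterized executemany replaces the line-by-line inserts (the schema and values are made-up examples):

    import sqlite3

    conn = sqlite3.connect("game.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS store_items (
                        id INTEGER PRIMARY KEY, name TEXT, price INTEGER)""")
    items = [(1, "Small coin pack", 100), (2, "Level unlock", 250)]
    # One parameterized statement instead of one INSERT per row.
    conn.executemany("INSERT OR REPLACE INTO store_items VALUES (?, ?, ?)", items)
    conn.commit()
    conn.close()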
