Need an efficient way of address comparisons

I am trying to find a way to compare two addresses, e.g. 6841 Day Drive with 6841 Day Dr. In this example, 'Drive' and 'Dr' need to be matched somehow.
I read some forums which suggest splitting the text and then matching the numerical portion, then the street name.
Now imagine there are thousands of records to be processed. The above process is practically useless at that scale.
Can anyone help me with some ideas on how to implement this efficiently?
Thanks in advance,
Mandeep

JoachimSauer wrote:
mandy_m wrote:
I call it practically useless to split the string and do text comparisons, considering the fact that there are many ways in which a user might write the address. I gave an example of Drive - Dr; others may be Road - Rd, Street - St, Apartment - Apt... and the list is very long.
That's why I said you need to canonicalize the data. Write a method (or class) that canonicalizes any given address. Then apply that to all data that you store (i.e. all your existing data). This might take some time, but only needs to be done once.
Joachim's definitely giving you good advice. If you have no control over the data coming in, this is definitely not a simple job; if you do, you could require people to put the 'type' of street and the building number separately from the street name (and don't forget that building numbers can be ranges if you're dealing with corporate addresses).
In England, a lot of people give names to their houses, which further muddies the waters, as it is often the first line in the address (the same could be true of a corporate address: Company House, 200-250 Suchandsuch Rd... etc.).
A few things to try (some have already been suggested); a rough sketch of the core steps in code follows the list:
1. Create an Address class based on the components that you know you want. At the very least, it should have 'equals()', 'hashCode()' and 'toString()' implemented.
2. Convert the input address to lowercase (or uppercase).
3. Create a dictionary of known street types (road, avenue, boulevard...etc). This could be a static HashSet in your Address class.
4. Create a list of known abbreviations (apt, rd, dr...). I'd suggest you include versions with and without punctuation.
5. Expand all abbreviations in your input address.
6. Remove all remaining punctuation, but keep the original line orientation (this might involve splitting your address string into 'lines' based on commas for example).
7. Look for a line that contains one of your known 'street types'. This is highly likely to be the actual street address.
8. Extract the number (or numbers, in the case of a range). Watch out for things like '221b baker street'.
9. Load the resulting components into your Address variables.
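To make the canonicalization steps concrete, here is a minimal sketch in Java. It is a hypothetical starting point, not a full solution: the abbreviation map is a tiny sample (build yours from your postal service's official list), and it assumes a single-line, US-style "number street-name street-type" input.
<code>
import java.util.HashMap;
import java.util.Map;

public final class Address {

    // Sample abbreviation map (step 4); extend it from your postal service's official list.
    private static final Map<String, String> ABBREV = new HashMap<String, String>();
    static {
        ABBREV.put("dr", "drive");
        ABBREV.put("rd", "road");
        ABBREV.put("st", "street");
        ABBREV.put("ave", "avenue");
        ABBREV.put("blvd", "boulevard");
        ABBREV.put("apt", "apartment");
    }

    private final String number;  // house number, e.g. "6841" or "221b"
    private final String street;  // canonical street name, e.g. "day drive"

    private Address(String number, String street) {
        this.number = number;
        this.street = street;
    }

    // Steps 2, 5, 6 and 8: lower-case, strip punctuation, expand abbreviations,
    // and split off the house number.
    public static Address canonicalize(String raw) {
        String[] tokens = raw.toLowerCase().replaceAll("[^a-z0-9 ]", " ").trim().split("\\s+");
        String number = "";
        StringBuilder street = new StringBuilder();
        for (int i = 0; i < tokens.length; i++) {
            String word = ABBREV.containsKey(tokens[i]) ? ABBREV.get(tokens[i]) : tokens[i];
            if (number.length() == 0 && word.matches("\\d+[a-z]?")) {
                number = word;  // first numeric token is the house number ("221b" works too)
            } else {
                if (street.length() > 0) street.append(' ');
                street.append(word);
            }
        }
        return new Address(number, street.toString());
    }

    public boolean equals(Object o) {
        if (!(o instanceof Address)) return false;
        Address a = (Address) o;
        return number.equals(a.number) && street.equals(a.street);
    }

    public int hashCode() {
        return 31 * number.hashCode() + street.hashCode();
    }

    public String toString() {
        return (number + " " + street).trim();
    }
}
</code>
With that in place, Address.canonicalize("6841 Day Drive").equals(Address.canonicalize("6841 Day Dr.")) is true. More importantly for thousands of records: canonicalize each stored address once, keep the canonical form (or put the Addresses in a HashSet/HashMap), and each incoming address then costs one canonicalization plus a hash lookup instead of a fuzzy comparison against every record.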
The fact of the matter is that even with all of the above, you will probably not have a 100% solution. Personally, I'd reject all input addresses that are so badly written that you can't determine required components for your class, but I'm an ornery cuss...
Apart from that, you could check out the postal service website where you live to see if they have any guidelines for parsing addresses. The GPO here used to have a great page for that.
Good luck. What you want is not simple.
Winston

Similar Messages

  • Need an efficient way to write history record

    I need to keep the old images of the records in table A after any changes made by the user. So, I created a history table B which is exactly the same as A but has two more
    columns to store the SYSDATE & USER.
    Currently, my program uses a cursor to loop through the records in A, insert every record into B with SYSDATE & USER, and then delete the record from A.
    Is there a better method to deal with this?

    Hi,
    You can write an UPDATE trigger on A to write the record to B.

  • Advice needed: Efficient way to scan users

    Hi all,
    I wish to know an efficient way to scan users in Lighthouse. I need to write a workflow that checks out all the users and performs some updates. This workflow should run every day at midnight.
    I have created a scanner myself. Basically what it does is:
    1. Call the FormUtils.getUsers method to return all users' names into a variable.
    2. Loop through this list and call a subprocess workflow to process every user. This subprocess checks out a user view, performs updates, and then checks in the view.
    This solution is not efficient at all, since it causes my JVM to run out of memory (1 GB of RAM assigned to the JVM, with about 78,000 users).
    Any advice is highly appreciated. Thank you.
    Steve

    OK... I now understand what you are doing and why you need this.
    A long, long, long time ago (back in the 3.x days) the deferred task scanner was really bad. Its nightly scan would scan ALL users each time. This is fine when your client had 4k users... but not when it has 140k users.
    Additionally, the "set deferred task" function had problems with two tasks of the same name (e.g. "disable resource"), since it used the name as the XML object name, which cannot be duplicated.
    So, to beat this, I rewrote the deferred task handler to allow me to do all of this. Part of this was to add a searchable field called 'nextTaskDate' on the user object. After each workflow this date is updated, so it is always correctly populated with the user's next deferred task date.
    Each night the scanner runs and queries all users with a nextTaskDate of today. This gives us a result set that we can iterate over, instead of having to list each user and search for tasks. It's a billion times faster.
    Your best bet is to store the task date in milliseconds and make your query "all users with next task date BEFORE now"... this way, if the server is hosed, you can execute tasks you may have missed.
    We have an entire reusable implementation framework that we have patented (of which this code is a part) that answers most of the types of issues you are bringing up. It makes these implementations much, much simpler, faster, more scalable and more maintainable.
    Does this make sense?
    Dana Reed
    AegisUSA
    Denver, CO 80211
    [email protected]
    773.412.3782
    "Now hiring best-in-class IdM architects. Inquire via email."

  • iCloud still shows my old email address when I try to enter, asking me for a password I no longer have. I need to change my email address and password in iCloud but I can't find a way to do it. Any ideas? Thanks.


    Log out (System Preferences > iCloud), then log in with the new address and password.

  • Most efficient way to delete "removed" photos from hard disk?

    Hello everyone! Glad to have this great community to come to for help. I searched for this question but came up with no hits. If it's already been discussed, I apologize and would love to be directed to the link.
    My wife and I have been using LR for a long time. We're currently on version 4. Unfortunately, she's not as tech-savvy or meticulous as I am, and she has been unknowingly "Removing" photos from the LR catalogues when she really meant to delete them from the hard disk. That means we have hundreds of unwanted raw photo files floating around in our computer and no way to pick them out from the ones we want! As a very organized and space-conscious person, I can't stand the thought. So my question is: what is the most efficient way to permanently delete these unwanted photos from the hard disk?
    I did find one suggestion that said to synchronize the parent folder with the respective catalogues, select all the photos in "Previous Import," and delete those, since they will be all of the photos that were previously removed from the catalogue.
    This is a great suggestion, but it probably wouldn't work for all of my catalogues since my file structure is organized by date (the default setting for LR). So, two catalogues will share the same "parent folder" in the sense that they both have photos from May 2013, but if I synchronize May 2013 with one, then it will get all the duds PLUS the photos that belong in the other catalogue.
    Does anyone have any suggestions? I know there's probably not an easy fix, and I'm willing to put in some time. I just want to know if there is a solution and make sure I'm working as efficiently as possible.
    Thank you!
    Kenneth

    I have to agree with the comment about multiple catalogs referring to images that are mixed in together... and the added difficulty that may have brought here.
    My suggestions (assuming you are prepared to combine the current catalogs into one):
    1. In each catalog, put a distinctive keyword onto all the images so that you can later discriminate these images as to which particular catalog they were formerly in (just in case this is useful information later).
    2. As John suggests, use File / "Import from Catalog" to bring all LR images together into one catalog.
    3. Then, in order to separate out the image files that ARE imported to LR from those which either never were / have been removed, I would duplicate just the imported ones to an entirely separate and dedicated disk location. This may require the temporary use of an external drive, with enough space for everything.
    4. To do this, highlight all the images in the whole catalog, then use File / "Export as Catalog", selecting the option "include negatives". Provide a filename and location for the catalog inside your chosen new saving location. All the image files that are imported to the catalog will be selectively copied into this same location alongside the new catalog. The same relative arrangement of subfolders will be created there for them all to live inside, as is seen currently. But image files that do not feature in LR currently will be left behind by this operation.
    5. Your new catalog is now functional, referring to the copied image files. Making sure you have a full backup first, you can start deleting image files from the original location that you believe to be unwanted. You can do this safe in the knowledge that anything LR is actively relying on has already been duplicated elsewhere. So you can be quite aggressive at this, only watching out for image files that are required for other purposes (than as master data for Lightroom) - e.g., the exported JPG files you may have made.
    IMO it is a good idea to practice a full separation of image files used in your LR image library, from all other image files. This separation means you know where it is safe to manage images freely using the OS, vs where (what I think of as the LR-managed storage area) you need to bear LR's requirements constantly in mind. Better for discrete backup, too.
    In due course, as required, the copied image files plus catalog can be moved bodily to another drive (for example, if they have been temporarily put on an external drive, and you want to store them on your main internal one again). This then just requires a single re-browsing of their parent folder's location, in order to correct LR's records inside this catalog, as to the image files' changed addresses.
    If you don't want to combine the catalogs into one, a similar set of operations as above, can be carried out for each separate catalog you have now. This will create a separate folder structure in each case, containing just those duplicated image files. Once this has been done for all catalogs, you can start to clean up the present image files location. IMO this is very much the laborious and inflexible option, so far as future management of the total body of images is concerned... though there may still be some overriding reason for working that way.
    RP

  • Most efficient way to get a connection from a defined connection pool

    Having recently load-tested the application we are developing, I noticed that one of the most expensive (time-wise) calls was my fetch of a db-connection from the defined db-pool. At present I fetch my connections using:
    private Connection getConnection() throws SQLException {
        try {
            Context jndiCntx = new InitialContext();
            DataSource ds = (DataSource) jndiCntx.lookup("java:comp/env/jdbc/txDatasource");
            return ds.getConnection();
        } catch (NamingException ne) {
            myLog.error(this.makeSQLInsertable("getConnection - could not find connection"));
            throw new EJBException(ne);
        }
    }
    In other parts of the code, not developed by the same team, I've seen the same task accomplished by:
    private Connection getConnection() throws SQLException {
        return DriverManager.getConnection("jdbc:weblogic:jts:FTPool");
    }
    From the performance measurements I made, the latter seems to be much more efficient (time-wise). To give you some metrics:
    The first version took a total of 75724 ms for 7224 calls, which gives ~11 ms/call.
    The second version took a total of 8127 ms for 11662 calls, which gives ~0.7 ms/call.
    I'm no JDBC guru, so I'm probably missing something vital here. One suspicion I have is that the second version first finds the JDBC pool and after that makes the very same (DataSource) jndiCntx.lookup("java:comp/env/jdbc/txDatasource") call in order to fetch the actual connection anyway. If that is true, then my comparison is plain wrong, since one call is part of the other. If not, then the second version sure seems a lot faster.
    Apart from the obvious performance differences between the two approaches above, is there any other difference one should be aware of (transaction context, for instance)? Basically, I'm working in an EJB environment on WebLogic 7.0 and looking for the most efficient way to get hold of a db-connection in code. Comments, anyone?
    //Linus Nikander - [email protected]

    Linus Nikander wrote:
    Thank you for both your replies. As per your suggestions I've improved my connection handling (I ended up implementing the Service Locator pattern, as a matter of fact).
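    For reference, a minimal sketch of such a Service Locator (hypothetical class; error handling trimmed). The point is that the JNDI lookup happens only once per DataSource name, and every later call is served from the cache:
    <code>
    import java.util.Hashtable;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    public final class ServiceLocator {
        private static final ServiceLocator INSTANCE = new ServiceLocator();
        private final Hashtable cache = new Hashtable(); // JNDI name -> DataSource

        private ServiceLocator() {}

        public static ServiceLocator getInstance() { return INSTANCE; }

        public synchronized DataSource getDataSource(String name) throws NamingException {
            DataSource ds = (DataSource) cache.get(name);
            if (ds == null) {
                ds = (DataSource) new InitialContext().lookup(name); // one-time lookup
                cache.put(name, ds);                                 // reused on later calls
            }
            return ds;
        }
    }
    </code>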
    One thing still puzzles me, though. Which (and why) is the "proper" way to fetch the actual DataSource? As I stated before, I've seen two approaches within the code I've got:
    1. myDs = myServiceLocator.getDataSource("jdbc:weblogic:jts:FTPool");
    2. myDs = myServiceLocator.getDataSource("java:comp/env/jdbc/tgsDB");
    where getDataSource does dataSource = (DataSource) initialContext.lookup(dataSourceName); dataSourceName being the input string, obviously.
    tgsDB is defined as
    <reference-descriptor>
    <resource-description>
    <res-ref-name>jdbc/tgsDB</res-ref-name>
    <jndi-name>tgs-dataSource</jndi-name>
    </resource-description>
    </reference-descriptor>
    in weblogic-ejb-jar.xml.
    From what I can understand from your answer, you don't recommend using the JNDI-lookup way of getting the connection at all?
    Correct.
    The service locator that I implemented will still perform a JNDI lookup, but only once. Will the fact that I'm talking to an RMI object anyway significantly impact performance (when compared to your non-JNDI method)?
    In some cases, for earlier 7.0s, maybe yes. For the very latest, it shouldn't hurt.
    In my two examples above, if I use version 1, how will the server know when to give me a TX-bound connection and when not to? In version 1, FTPool maps to a pool with both TX and non-TX datasources. In version 2, tgsDB maps directly to a TX datasource.
    I might be asking a lot of strange questions, probably because I'm just getting the hang of all the resource-reference issues that EJBs are associated with. Bear with me ;)
    //Linus
    "Joseph Weinstein" <[email protected]> wrote in message
    news:[email protected]...
    Hi. As Jon said, the lookups are redundant. Because you showed that otherway,
    I will infer that this code is always being run in serverside code. Good.I will give you
    a third way which is much better than either of the ones you showed. Thefirst method
    you showed has a problem for all but the latest sps, your jdbc objectswill all be
    going through an unnecessary level of indirection because you are gettingan rmi jdbc
    object which talks to the jts driver object.
    The second, faster method you showed also has a serious problem! Oneshould
    never call DriverManager methods in multithreaded JDBC programs becauseall
    DriverManager calls are class-synchronized, including some small internalones like
    DriverManager.println(), which all JDBC drivers and even the constructorfor
    SQLException call, so one slow getConnection() call can inadvertantly haltall other
    JDBC being done in the whole JVM! Also, for JVMs that have lots of jdbcdrivers
    registered, DriverManager is inefficient because it simply sends your URLand
    properties to every driver it has registered until it finds one thatdoesn't throw an
    exception and returns a connection.
    Here's the fastest way:
    // do once and reuse driver object everywhere. Can be used by multiplethreads
    Driver d =(Driver)Class.forName("weblogic.jdbc.jts.Driver").newInstance();
    Then, whenever you want a connection:
    public myJDBCMethod()
    Connection c = null; // always a method level object
    try {
    c = d.connect("jdbc:weblogic:jts:FTPool", null);
    ... do all the jdbc for the method...
    c.close();
    c = null;
    catch (Exception e) {
    ... do whatever, if needed...
    finally {
    // close connection regardless of failure or exit path
    if (c != null) try {c.close();}catch (Exception ignore){}
    Joe
    Linus Nikander wrote:
    [original message quoted above]

  • Efficient way to get FCE4 Log and Transfer to read .mts files stored on a drive?

    Hi All
    I've searched the FCE discussion forum and not found an answer verified by more than one user to this question: what is an efficient way to get FCE4 (via the Log and Transfer window) to see .mts files from an AVCHD camera stored on a drive (NOT via the camera -- directly from the drive)?
    I am trying to plan the most space-efficient system possible for storing un-transcoded .mts files from a Panasonic AG-HMC151 on a hard drive so that I can easily ingest them into FCE4. I am shooting a long project and I want to be able to look at .mts files so that I can decide which ones to transcode to AIC for the edit.
    Since FCE4 cannot see .mts files unless they have their metadata wrapper, the question is really: 'How do I most efficiently transfer .mts files from the camera to a storage hard drive with their metadata wrappers, so that FCE4 can see them via the Log and Transfer window?'
    Nick Holmes, in a reply in this thread
    http://discussions.apple.com/thread.jspa?messageID=10423384&#10423384
    gives 2 options: Use the Disk Utility to make a disk image of the whole SD card, or copy the whole contents of the card to a folder. He says he prefers the first option because it makes sure everything on the card is copied.
    a) Have other FCE users done this successfully and been able to read the .mts files via Log and Transfer?
    In a response to this thread:
    http://discussions.apple.com/thread.jspa?messageID=10257620&#10257620
    wallybarthman gives a method for getting Log and Transfer to see .mts files that have been stored on a harddrive without their metadata wrappers by using Toast 9 or 10.
    b) Have any other FCE4 users used this method? Does it work well?
    c) Why is FCE4 unable to see .mts files without their metadata wrappers in the Log and Transfer window? Is it just a matter of writing a few lines of code?
    d) Is there an archiving / library app. on the market that would allow one to file / name / tag many .mts clips and view them prior to transcoding into space-hungry AIC files in FCE?
    Any/all help would be most gratefully received!

    I have saved the complete file structure on DVD as a backup, but have not needed to open them yet. But I will add this: as I understand the options with Toast, you are in fact converting the video to AIC or something like it. I haven't looked into it myself, and I can't imagine the extra files are that large, but maybe they are significant; I don't know. The transcoded files are huge in comparison to the AVCHD files.
    A new player on the scene for AVCHD is Clipwrap 2.0. As I understand this product, it rewraps the AVCHD into a wrapper that QuickTime can open and play. This works with the MTS files only; the rest of the file structure is not needed. The rewrap is much faster than the transcode to AIC, so you have the added benefit of being able to play the files as well as not storing the extra files. The 2.0 version (which is for AVCHD) was just recently released. I haven't tried it and don't personally know of anyone who has, but you might want to give it a go; there is a trial version, as I recall.

  • EFFICIENT way of escalating an open task

    I need to escalate TASKS that are still open after 31 days.
    I figure I need 2 workflows to do this.
    As I see it right now:
    1st WF: waits for 31 days after the task has been created. On the 31st day, it changes a read-only field called "escalate" to YES.
    2nd WF: checks for changes in tasks where: if (Status=OPEN AND escalate<>pre(escalate)) is true, then send an escalation email or task.
    Is there a more efficient way of doing this?
    TIA
    Paul

    Is there a reason you want two workflows? Why not put an e-mail action after the Wait on the same workflow? If you check the "Reevaluate Rule Conditions After Wait" checkbox on the Wait action, the workflow rule will be re-evaluated after your 31 days... so it would only send the e-mail message if the Task is still open (assuming your workflow condition is set to look at Status = Open).
    Chris

  • SQL query with multiple tables - what is the most efficient way?

    Hello, I am learning PL/SQL. I have a simple procedure where I need to find the number of employees and departments per location, based on a user's input of location_id.
    I have 3 Tables:
    LOCATIONS
    location_id (pk)
    location_name
    DEPARTMENTS
    department_id (pk)
    location_id (fk)
    department_name
    EMPLOYEES
    employee_id (pk)
    department_id (fk)
    employee_name
    1 Location can have 0-MANY Departments
    1 Employee has 1 Department
    Here is the query I came up with for PL/SQL procedure:
    /*Ecount, Dcount are NUMBER variables */
    SELECT SUM (EmployeeCount), COUNT(DepartmentNumber)
         INTO Ecount, Dcount
         FROM     
         (SELECT COUNT(employee_id) EmployeeCount, department_id DepartmentNumber
              FROM employees
              GROUP BY department_id
              HAVING department_id IN
                        (SELECT department_id
                        FROM departments
                        WHERE location_id = userInput));
    I do get the correct result, but I am just wondering if my query is on the right track and if there is a more "efficient" way of doing this.
    Thanks in advance for helping a newbie out.

    Hi,
    Welcome to the forum!
    Something like this will be more efficient:
    SELECT  COUNT (employee_id)            AS ECount
    ,       COUNT (DISTINCT department_id) AS DCount
    FROM    employees
    WHERE   department_id IN (  SELECT  department_id
                                FROM    departments
                                WHERE   location_id = :userInput
                             );
    You should also try a join instead of the IN subquery.
    For efficiency, do only the things you need to do.
    For example, you don't need a count of employees in each department, so don't compute one. That means you won't need the in-line view, so don't have one.
    You don't need PL/SQL for this job, so don't use PL/SQL if you don't have to. (I realize this question was out of context, so you may have good reasons for doing this in PL/SQL.)
    Do all filtering as early as possible. Don't waste effort computing things that won't be used .
    A particular example of this is: Never use a HAVING clause when you can use a WHERE clause. What's the difference between a WHERE clause and a HAVING clause? The WHERE clause is applied before aggregate functions are computed, and the HAVING clause is applied after; there's no other difference. Therefore, if the HAVING clause isn't referencing an aggregate function, it could be done in a WHERE clause instead.

  • A more efficient way to assure that a string value contains only numbers?

    Hi,
    I'm using Oracle 9.2.0.6.
    I was curious to know if there is any way I could write a more efficient query to determine if a string value contains only numbers.
    Here's my current query. This SQL is from a subquery in a Join clause.
    select distinct cta.CUSTOMER_TRX_ID, to_number(cta.SALES_ORDER) SALES_ORDER
                from ra_customer_trx_lines_all cta
                where length(cta.SALES_ORDER) = 6
                and cta.SALES_ORDER is not null
                and substr(cta.SALES_ORDER,1,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,2,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,3,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,4,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,5,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,6,1) in('1','2','3','4','5','6','7','8','9','0')
    This is a string where I'm finding A-Z and a-z characters plus '/' and '-' characters in all 6 positions, and there are also values that are longer than 6 characters. That's what the length(cta.SALES_ORDER) = 6 is for. Also, of course, some cells are NULL.
    So the question is, is there a more efficient way to screen out only the values in this field that are 6 character numbers or is what I have the best I can do?
    Thanks,

    I appreciate all of your very helpful workarounds. The cost is a little better in all cases than with my original WHERE clause.
    To address the discussion about design that's popped up from this question, I can say a few things that should clear up my situation, at least.
    First of all, this custom quoting, purchase order, and sales order entry system WAS written by a bunch of 'bad' coders who didn't document their work and then left. We don't even have an ER diagram.
    The whole project that I'm only a small part of is literally trying to put Humpty Dumpty together again and then move it from a bad custom solution into Oracle Applications.
    We're rebuilding, documenting, and doing ETL. This is one of your prototypical projects from hell.
    It's a huge database project, so we're taking small bites at a time. Hopefully, somewhere right before Armageddon hits, this thing will be complete.
    But until then,..., well,..., you know the drill.
    Thanks Again.

  • Most efficient way to use thumbnails of multiple sizes

    When a user submits an image on my website, the upload script
    currently creates thumbnails in three different sizes (120px, 90px,
    and 20px). Different thumbnail sizes are used in different areas of
    the site.
    Is there a more storage-efficient way to display high-quality
    thumbnails in different sizes, without requiring a separate
    thumbnail file for each size used?
    I cannot rely on browsers to resize images as the quality is
    often very undesirable.

    AngryCloud wrote:
    > I may not have been clear in my last post...
    >
    > When an image is viewed normally on a page, it is saved to the client's computer so that it will load instantly the next time the image is called for.
    >
    > I do not want visitors to have to wait for the same images they have already seen to re-download and resample. A file of each resampled image should be saved to the client's computer to avoid this.
    >
    > Is it possible to save resampled images on a page to the client's computer?
    It sounds like your page needs to check whether the cached version exists before creating a new image, otherwise it's always going to create a new version of the image and send it to the browser... but I don't know how this is possible.
    Dooza
    Posting Guidelines
    http://www.adobe.com/support/forums/guidelines.html
    How To Ask Smart Questions
    http://www.catb.org/esr/faqs/smart-questions.html
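    As a rough illustration of Dooza's check-the-cache-first suggestion, here is a hypothetical server-side sketch in Java using ImageIO (the original site may well use a different stack; the names are made up). Each size is resampled at most once per image, and later requests reuse the saved file:
    <code>
    import java.awt.Graphics2D;
    import java.awt.RenderingHints;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;

    public class ThumbnailCache {
        // Return a cached thumbnail of the given width, generating it only if missing.
        public static File getThumbnail(File original, int width) throws IOException {
            File cached = new File(original.getParent(),
                    "thumb_" + width + "_" + original.getName());
            if (cached.exists()) {
                return cached; // already generated once: reuse it
            }
            BufferedImage src = ImageIO.read(original);
            int height = src.getHeight() * width / src.getWidth(); // keep aspect ratio
            BufferedImage dst = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = dst.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                    RenderingHints.VALUE_INTERPOLATION_BILINEAR); // decent-quality resample
            g.drawImage(src, 0, 0, width, height, null);
            g.dispose();
            ImageIO.write(dst, "jpg", cached);
            return cached;
        }
    }
    </code>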

  • Most efficient way to loop through many similarly named fields?

    Hi,
    I have a 5 page document with each page containing appx. 50 similarly named fields. E.g. Viol1Num, Viol2Num, Viol3Num ... Viol50Num.
    I am looking for an efficient way of programming a loop to look at each field in Javascript so I can do some manipulations in those fields on what the user entered.
    In FormCalc I've previously used the 'foreach' function similar to:
    foreach (Field1, Field2, Field3.....Field50) do
         'BLAH'
    endfor
    however, that gets really lengthy, especially when dealing with subsequent pages where I have to start adding 'topmostSubform.Page2.' in front of each field name so that I can access from the first page all of the fields on subsequent pages.  Also, I need to do this in Javascript, not FormCalc.
    For example, in JS I am using this loop to mark all fields as read only:
    for (var nPageCount = 0; nPageCount < xfa.host.numPages; nPageCount++) {
        var oFields = xfa.layout.pageContent(nPageCount, "field");
        var nNodesLength = oFields.length;
        for (var nNodeCount = 0; nNodeCount < nNodesLength; nNodeCount++) {
            oFields.item(nNodeCount).access = "readOnly";
        }
    }
    How could I do something similar to that so I could look through each field and perform actions on it without having to list out every single field name?
    I tried altering that to look at fields instead of field properties, but I couldn't get it to run.
    Thanks.

    If this is an LCD form then you're better off asking over at the LCD forum.

  • Most efficient way to loop through similarly named fields?

    (Same question as the previous thread.)

    I have solved my issue. It took some battling in JavaScript using xfa.resolveNode.
    I have 5 pages, each consisting of a series of 60 fields named Viol1Num, Viol2Num, Viol3Num ... Viol60Num.
    When this JavaScript runs, if it detects a blank field it inserts a '3' into it.
    Below is the JavaScript which runs for the second page of this document:
    var LoopCounter = 1; // fields are numbered from 1
    while (LoopCounter < 61) {
        if ((LoopCounter != 21) && (LoopCounter != 22)) {
            if ((xfa.resolveNode("topmostSubform.Page2.Viol" + LoopCounter + "Num").rawValue == null) || (xfa.resolveNode("topmostSubform.Page2.Viol" + LoopCounter + "Num").rawValue == "")) {
                xfa.resolveNode("topmostSubform.Page2.Viol" + LoopCounter + "Num").rawValue = 3;
            }
        }
        LoopCounter = LoopCounter + 1;
    }

  • Most efficient way to track 3 states?

    In a program I am writing, I have an object with three states, which it progresses through during the course of the program. Since the program (potentially) uses a lot of these objects, I want to keep their size down. Here's the issue: how to store what state the object is in? A boolean won't work, of course, since there are three states. So here's what I came up with:
    Either a Boolean (initial state is null, then false, then true)
    or a byte (any three values would work)
    I have a feeling that the byte is more efficient, but I want to make sure. Also, if I'm missing an easy, efficient way to do this, tell me.

    Nearly there:
    <code>
    public abstract class State {
        abstract void handleState();
        abstract int getState();
    }

    public class State1 extends State {
        void handleState() {
            // do state 1 specific stuff
        }
        int getState() {
            return 1;
        }
    }

    public class State2 extends State {
        void handleState() {
            // do state 2 specific stuff
        }
        int getState() {
            return 2;
        }
    }

    public class State3 extends State {
        void handleState() {
            // do state 3 specific stuff
        }
        int getState() {
            return 3;
        }
    }

    public class StateController {
        private State currentState = new State1();
        public int getState() {
            return currentState.getState();
        }
        public void handleState() {
            currentState.handleState();
        }
    }
    </code>
    might work ;)
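    A quick driver for the sketch above (hypothetical class, just to show the calls):
    <code>
    public class StateDemo {
        public static void main(String[] args) {
            StateController controller = new StateController();
            controller.handleState();                  // runs State1's behaviour
            System.out.println(controller.getState()); // prints 1
        }
    }
    </code>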
    As to booleans... no, they cannot be null, actually... they are represented by a bit which is 0 or 1 (or, if you prefer, true or false).
    Boolean b = null;
    is not a null boolean; it is an uninitialised object reference.
    If you need an Object that you want to be a boolean then it is fine to use the Boolean class :)
    The reason boolean b; b == null doesn't compile is because you cannot compare different types.

  • Most efficient way to load XML file data into tables

    I have a complex XML file running into MBs. I want to load its data into 7-8 tables.
    Which way would be better:
    1) Use SQL*Loader to load directly into the 7-8 tables by modifying the control file.
    Is this really possible and feasible? I am not even sure about it.
    2) Load the data as XMLType in a table and register it. Then extract from there to load into the various tables.
    Please help. I have to find the most efficient way of doing it.
    Regards,
    Sudhir

    Yes, it is possible to use SQL*Loader to parse and load XML, but that is not what it was designed for, and so it is not recommended. You also don't need to register a schema just to load/store/parse XML in the DB.
    So where does that leave you?
    Some options
    {thread:id=410714} (see page 2)
    {thread:id=1090681}
    {thread:id=1070213}
    Those talk some about storage options and reading in XML from disk and parsing XML. They should also give you options to consider. Without knowing more about your requirements for the effort, it is difficult to give specific advice. Maybe your 7-8 tables don't exist and so using Object Relational Storage for the XML would be the best solution as you can query/update tables that Oracle creates based off the schema associated to the XML. Maybe an External Table definition works better for reading the XML into the system because this process will happen just once. Maybe using WebDAV makes more sense for loading XML to be parsed (I don't have much experience with this, just know it is possible from what I've read on the forums). Also, your version makes a difference as you have different options available depending upon the version of Oracle.
    Hope all that helps as a starter.
    Edited by: A_Non on Jul 8, 2010 4:31 PM
    For a great example, see the answers by mdrake in {thread:id=1096784}.
