Advice needed: Efficient way to scan users

Hi all,
I'd like to know the most efficient way to scan users in Lighthouse. I need to write a workflow that checks out all the users and performs some updates. This workflow should run every day at midnight.
I have written a scanner myself. Basically, what it does is:
1. Call the FormUtils.getUsers method to return all user names into a variable.
2. Loop through this list and call a subprocess workflow to process every user. This subprocess checks out the user view, performs the updates, and then checks the view back in.
This solution is not efficient at all, since it causes my JVM to run out of memory (1 GB of RAM assigned to the JVM, with about 78,000 users).
Any advice is highly appreciated. Thank you.
Steve

OK... I now understand what you are doing and why you need this.
A long, long, long time ago (back in the 3.x days) the deferred task scanner was really bad. Its nightly scan would scan ALL users each time. That is fine when your client has 4k users... but not when it has 140k users.
Additionally, the "set deferred task" function had problems with two tasks of the same name (e.g. "disable resource"), since it used the task name as the XML object name, which cannot be duplicated.
So, to get around this, I rewrote the deferred task handler. Part of this was adding a searchable attribute called 'nextTaskDate' on the user object. After each workflow this date is updated, so it is always correctly populated with the user's next deferred task date.
Each night the scanner runs and queries all users with a nextTaskDate of today. This gives us a result set that we can iterate over, instead of having to list every user and search each one for tasks. It is orders of magnitude faster.
Your best bet is to store the task date in milliseconds and make your query "all users with a next task date BEFORE now"... this way, if the server was down, you can still execute any tasks you may have missed.
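These are not Lighthouse API calls, just the shape of that nightly job sketched in plain Java; UserRepository, queryByNextTaskDate and runDeferredTasks are hypothetical stand-ins for whatever query and workflow-launch calls your deployment actually exposes:

import java.util.List;

// Sketch of the nightly scan: query only the users whose nextTaskDate has
// already passed, instead of listing and checking out all 78,000 users.
public class DeferredTaskScanner {

    // Hypothetical data-access layer.
    interface UserRepository {
        // Users whose 'nextTaskDate' attribute (stored in milliseconds) is <= cutoffMillis.
        List<String> queryByNextTaskDate(long cutoffMillis);
        // Check out the user view, run the deferred work, check the view back in.
        void runDeferredTasks(String userName);
    }

    private final UserRepository repo;

    public DeferredTaskScanner(UserRepository repo) {
        this.repo = repo;
    }

    public void runNightly() {
        // "BEFORE now" rather than "== today", so tasks missed during an outage
        // are picked up on the next run.
        long now = System.currentTimeMillis();
        for (String userName : repo.queryByNextTaskDate(now)) {
            repo.runDeferredTasks(userName);
        }
    }
}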
We have an entire reusable implementation framework that we have patented (of which this code is a part) that addresses most of the types of issues you are bringing up. It makes these implementations much simpler, faster, more scalable and more maintainable.
Does this make sense?
Dana Reed
AegisUSA
Denver, CO 80211
[email protected]
773.412.3782
"Now hiring best-in-class IdM architects. Inquire via emai"

Similar Messages

  • Advice needed: The way to solve out of memory problem (or the way to work with big csv files)

    Hello:)
    I'm in trouble: I have a big CSV file (over 5 GB of web-analytics data), 64-bit Excel, and 6 GB of RAM.
    I can't load the file into the data model because of its size; Power Query gives an "out of memory" error.
    This is the first time I have encountered such a problem.
    What options do I have for working with such a file? Increasing the memory in my computer? Would that solve the problem? How much would I need to work with a 6 GB CSV?
    Or maybe I can upload my data somewhere on Azure and work with it there?
    So the question is: is there any way to deal with big files using Power Query? Or do I need to become a developer and learn SQL or other languages?
    Thanks in advance.
    Max

    Hi Miguel!
    Thanks for your answer. 
    I've tried loading this file on a virtual PC from the Azure cloud,
    and I have increased the memory limit in the Power Query settings,
    but the problem is still the same.
    What am I doing wrong?

  • What is the most time efficient way to scan massive amounts of text data with LabVIEW?

    I am currently running an application that scans data in text files for outliers. After each file is scanned, the statistics are stored in a database (if there are outliers), so at least the computer's memory will not be eaten up. In order to scan lines of data without killing the computer, I put a 1 millisecond delay in the scanning loop. I have massive amounts of data in thousands of files to scan, and taking one millisecond per line of data is taking too much time: at this rate, it will take over a WEEK to scan all the data! Is there anything I can do to minimize the time per line scan? If anybody knows, I need a solution. If anybody thinks or knows there is NO solution, I need to hear that feedback too!

    I would use a queue to pass the data to the processing task. You could put some intelligence in your file-reading task to hold off reading a new file until the processing task has completely processed the data. However, I suspect you should be able to process things fairly quickly. The suggestion to include a Wait 0 is a good one: you should always avoid writing repetitive loops that give the NI scheduler no chance to perform a context switch, though the various queue VIs do allow the system to context switch if required.
    Mark Yedinak
    "Does anyone know where the love of God goes when the waves turn the minutes to hours?"
    Wreck of the Edmund Fitzgerald - Gordon Lightfoot
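    The queue idea itself isn't LabVIEW-specific. As a rough illustration outside LabVIEW, the same producer/consumer split might look something like the Java sketch below (the "data" directory and the line-level outlier test are placeholders, not anything from the original post):

    import java.nio.file.*;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.stream.Stream;

    // Producer/consumer sketch: one thread reads lines from the files and queues
    // them; a second thread scans them, so reading never waits on the scanning.
    public class LineScanner {
        private static final String POISON = "\u0000EOF"; // end-of-stream marker

        public static void main(String[] args) throws Exception {
            BlockingQueue<String> queue = new LinkedBlockingQueue<>(10_000);

            Thread producer = new Thread(() -> {
                try (Stream<Path> files = Files.list(Paths.get("data"))) { // placeholder directory
                    files.forEach(file -> {
                        try (Stream<String> lines = Files.lines(file)) {
                            lines.forEach(line -> put(queue, line));
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    });
                } catch (Exception e) {
                    e.printStackTrace();
                }
                put(queue, POISON);
            });

            Thread consumer = new Thread(() -> {
                try {
                    String line;
                    while (!(line = queue.take()).equals(POISON)) {
                        if (line.contains("OUTLIER")) { // placeholder outlier test
                            // write the statistics for this line to the database here
                        }
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }

        private static void put(BlockingQueue<String> queue, String value) {
            try {
                queue.put(value);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }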

  • Need Video Chat for several users / one platform

    I thought FaceTime might be good, but it only works on Apple devices, so Skype it is.
    I need a way to let users video chat. Any suggestions?

    You can use Skype group video calling, which is free.
    If you want to record Skype calls, check out here and here.

  • Is there a way to identify user accounts that need to be locked?

    Hi,
    I am trying to write a script that will lock user accounts for employees that are being outprocessed (e.g. quit, fired, went to a different project). The trouble I'm having is that the way I'm notified is by an email from security saying that a person (first and last name provided in the email) is being outprocessed. However, that individual may have multiple accounts, and the account names don't always follow the same format, like "first initial + last name". For example, I may have a user named John Doe with accounts like jdoe_sensor1, jdoe_sensor2, etc. Then there could be a user Alice Smith with accounts like alice_s_sensor1, alice_s_sensor2, etc. I know I can use OEM to lock users, but there are two main problems with that. 1 -- Finding the users, then clicking on each user and then locking them one by one. And 2 -- I may not need to lock them right away. For example, the email from security may say "Lock all accounts for FIRSTNAME LASTNAME at the end of the day on a certain date." So I was hoping to write a script to identify the accounts, lock the users, and then verify they were locked, and run it in cron so the accounts get locked when they're supposed to be. Examples of the SQL statements I'm thinking of are:
    SELECT username, user_id, account_status FROM dba_users WHERE username like upper ('%$user%');
    ALTER user $user ACCOUNT LOCK;
    SELECT username, user_id, account_status FROM dba_users WHERE username like upper ('%$user%');
    So basically, I need a way to find out what the possible combinations are for $user.  Is there a view besides dba_users which has more detailed information like first name and last name?  I'm thinking if there is, then I can query that and find out all the accounts that user has and then plug those into the lock script.    
    Thanks!
    Jon

    There is a very large problem with being given only a person's name and not their user ids.
    For example, if you have two people with same (or similar) name, then what?
    John Doe
    John J. Doe
    This seems to be very common, and even more so with some very common names:
    Smith
    Chin
    etc
    So even if you have a lookup table:
    Name          Userid
    John Doe      johndoe
    John Doe      jdoe
    John J. Doe   johnd
    J. Doe        jdoe2
    John D        john_d
    Jon Doe       jond
    Jim Doe       jidoe
    Johnny Doe    jonydoe
    Really, nowadays, with different policies, practices, etc, I've seen all manner of userids. When you're given somebody to "close down", you should really press them to provide userids, not just first name, last name.
    After all, if they tell you to lock all "John Doe's" accounts, how do you know that the id "johnd" isn't supposed to be locked? Or even "jond"? You really have no idea. Did security mean "John J. Doe" and just not provide his initial? What if they both happen to have a J middle initial, but only one of them is registered with it because the other one already existed?
    My thought: If you're not given the specific userid(s), you're running a pretty good risk (at some point in time) that you will lock an id you shouldn't, or not lock an id you should.

  • Need a way to find out if a DB link is being used and by how many users

    Hi,
    I have a dblink from database FINDB to TESTDB named "ftslink".
    I need a way to find out if the dblink is being used, and also how many users are accessing it at any instant in time.
    Regards.

    Hi,
    After some searching I found what I wanted.
    Check the link below on db links:
    https://netfiles.uiuc.edu/jstrode/www/orasql/db_link.html
    Thanks.

  • Efficient way of saving documents uploaded by users

    Hello Experts,
    We are looking for an efficient way of storing documents uploaded by users. Below is an explanation of our scenario in detail.
    We are working on the SAP E-Recruiting module and have designed custom Interactive Adobe Forms and Web Dynpro components for the client. An end user (the initiating manager) fills in the form, uploads file(s) in support of his statement, and clicks the "submit" button. As of now we are achieving the upload functionality through WDA's FileUpload UI element and dumping the file contents into a field of type RAWSTRING.
    We now want to change our approach of using the RAWSTRING field and go for a more efficient one. I have been googling around and have read a bit about approaches like:
    Class CL_BDS_DOCUMENT_SET
    Class CL_FITV_GOS
    Function Module BDS_BUSINESSDOCUMENT_CREATEF
    But all three of these approaches seem to be targeted towards some particular business object; they expect some sort of class name to be passed to them as input. However, ours is a completely custom requirement in which the entire form's data goes into a Z-table, so none of these approaches would hold good. (Please correct me if I am wrong when I say that, because I haven't personally worked with any of them to date!) Awaiting your expert opinions on this matter.
    Regards,
    Uday
    Any ideas please?
    Edited by: Uday Gubbala on Mar 4, 2010 10:56 AM

    See if this sample code helps... we have DMS (Document Management System):
    * Fill the document header data for the new document info record
      MOVE: 'DRW'                              TO DOCUMENTDATA-DOCUMENTTYPE,
            'C:\Users\sci30\Desktop\test.doc'  TO DOCUMENTDATA-DOCFILE1,
            'TEST DESCRIPTION'                 TO DOCUMENTDATA-DESCRIPTION.
    * MOVE 'WR' TO DOCUMENTDATA-STATUSINTERN.
      MOVE 'WRD' TO DOCUMENTDATA-WSAPPLICATION2.

    * Link the document to the material (object type MARA)
      CLEAR: WA_OBJ_LINK.
      MOVE 'MARA'            TO WA_OBJ_LINK-OBJECTTYPE.
      MOVE HEADDATA-MATERIAL TO WA_OBJ_LINK-OBJECTKEY.
      APPEND WA_OBJ_LINK TO GT_OBJ_LINK.

    * Create the document info record and attach the object link
      CALL FUNCTION 'BAPI_DOCUMENT_CREATE2'
        EXPORTING
          DOCUMENTDATA         = DOCUMENTDATA
    *     HOSTNAME             =
    *     DOCBOMCHANGENUMBER   =
    *     DOCBOMVALIDFROM      =
    *     DOCBOMREVISIONLEVEL  =
    *     CAD_MODE             = ' '
    *     PF_FTP_DEST          = ' '
    *     PF_HTTP_DEST         = ' '
    *     DEFAULTCLASS         = 'X'
        IMPORTING
    *     DOCUMENTTYPE         =
    *     DOCUMENTNUMBER       =
    *     DOCUMENTPART         =
    *     DOCUMENTVERSION      =
          RETURN               = RETURN_DOCUBAPI
        TABLES
    *     CHARACTERISTICVALUES =
    *     CLASSALLOCATIONS     =
    *     DOCUMENTDESCRIPTIONS =
          OBJECTLINKS          = GT_OBJ_LINK.
    *     DOCUMENTSTRUCTURE    =
    *     DOCUMENTFILES        =
    *     LONGTEXTS            =
    *     COMPONENTS           =

      COMMIT WORK.

  • Advice needed: is BDB a good fit for what I aim at?

    Hello everyone,
    I'm not a BDB user (yet), but I really think that the BDB library
    IS the perfect fit for my needs.
    I'm designing an application with a "tricky" part, that requires a very fast
    data storage/retrieval solution, mainly for writes (but for reads too).
    Here's a quick summary of this tricky part, that should at least use
    2 databases:
    - the first db will hold references to contents, with a few writes per hour
    (the references being "pushed" to it from a separate admin back end), but
    expected high numbers of reads
    - the second db will log requests and other events on the references
    contained in the first db: it is planned that, on average, one read from DB1
    will produce five times as many writes into DB2.
    To illustrate:
    DB1 => ~25 writes / ~100 000 reads per hour
    DB2 => ~500 000 writes / *(60?) reads per hour
    (*will explain about reads on DB2 later in this post)
    Reads and writes on both DBs are not linear, say that for 500 000 writes
    per hour, you could have the first 250 000 being done within 20 minutes,
    for instance. There will be peaks of activity, and low-activity phases
    as well.
    That being said, do the BDB experts here think that BDB is a good fit for
    such a need? If so or if not, could you please let me know what makes you
    think what you think? Many thanks in advance.
    Now, about the "*(60?) reads per hour" for BD2: actually, data from DB2
    should be accessed in real time for reporting. As of now, here is what
    I think I should do to ensure and preserve a high write throughput so as not to
    miss any write in DB2 => once per minute another "DB2" is created that will
    now record new events. The "previous" DB2 is now dumped/exported into another
    database which will then be queried for real-time (not exactly real-time,
    but up to five minutes is an acceptable delay) reporting.
    So, in my first approach, DB2 is "stopped" then dumped each minute, to another
    DB (not necessarily BDB, by the way - the data could probably be re-structured another
    way into another kind of NoSQL storage to facilitate querying and retrieval
    from the admin back end), which would make 60 reads per hour (but "entire"
    reads, full db)
    The questions are:
    - do you think that renewing DB2 that often would improve or strain performance?
    - is BDB good and fast at doing massive dumps/exports? (OK: 500 000 entries per
    hour would make ~8300 entries per minute on average, so let's say that a dump's
    max size is 24 000 rows of data)
    - would it or not be better to read directly into the current DB2 as it is
    storing (intensively) new rows, which would then avoid the need to dump each
    minute and then provide more real-time features? (then would just need a daily
    dump, to archive the "old" data)
    Anyone who has had to face such questions already is welcome, as well as
    any BDB user who thinks they can help on this topic!
    Many thanks in advance for your advice and knowledge.
    Cheers,
    Jimshell

    Hi Ashok
    Many thanks for your fast reply again :)
    Ashok_Ora wrote:
    Great -- thanks for the clarification.
    Thank YOU, my first post was indeed a bit confusing, at least about the reads on DB2.
    Ashok_Ora wrote:
    Based on this information, it appears that you're generating about 12 GB/day into DB2, which is about a terabyte of data every 3 months. Here are some things to consider for ad-hoc querying of about 1 TB of data (which is not a small amount of data).
    That's right, this is quite a huge amount of data, and it will keep growing and growing... Although the main goal of the app is to achieve (almost) real-time reporting, it will also (potentially) need to be able to compute data over different time ranges, including yearly ranges for instance - but in this case, the real-time capabilities wouldn't be relevant, I guess: if you look at data over a year span, you probably don't need it to be accurate to a daily interval, for instance (well, I guess), so this part of the app would probably only use the "very old" data (not the current day's data), whatever it is stored in...
    Ashok_Ora wrote:
    Query performance is dramatically improved by using indexes. On the other hand, indexing data during the insert operation is going to add some overhead to the insert - this will vary depending on how many fields you want to index (how many secondary indices you want to create). BDB automatically indexes the primary key. Generally, any approach that you consider for satisfying the reporting requirement will benefit from indexing the data.
    Thanks for pointing that out! I did envisage using indexes, but my concern was (and you guessed it) the overhead that they bring. At this stage (but I may be wrong, this is just a study in progress, which will also need proper tests and benchmarking), I plan to favour write speed over everything else, to ensure that all the incoming data is indeed stored, even if it is quite awkward to handle in its primary stored form.
    I prefer to envisage (but again, it's not said that it is the right way of doing it) very fast inserts, then possibly re-process (sort of) the data later, and (maybe? certainly?) elsewhere, in order to have it more "query friendly" and efficient for moderately complex queries for legible reports/charts.
    Ashok_Ora wrote:
    Here are some alternatives to consider, for the reporting application:
    - Move the data to another system like MongoDB or CouchDB as you suggest and run the queries there. The obvious cost is the movement of data and maintaining two different repositories. You can implement the data movement in the way I suggested earlier (close "old" and open "new" periodically).
    This is pretty much in line with what I had in mind when posting my question here :).
    I found out in several benchmarks (there are not a lot, but I did find some ^^) that BDB, amongst others, is optimized for bulk reads, i.e. retrieving a whole lot of data at once is faster than, for instance, retrieving the same row n times. Is that right? Now, I guess that this is tightly related to the configuration and the server's performance...
    The process would then feed data into a new "DB2" instance every 60 seconds, and "dumping"/merging the previous one into another DB (BDB or else), which would grow until some defined limit.
    Would the "old DB2" > "main, current archive" be a heavy/tricky process, according to you? Especially as the "archive" DB is growing and growing - what would be a decent "limit" to take into account? I guess that 1TB for 3 months of data would be a bit big, wouldn't it?
    Ashok_Ora wrote:
    - Use BDB's SQL API to insert and read data in DB1 and DB2. You should be able to run ad-hoc queries using SQL. After doing some experiments, you might decide to add a few indices to the system. This approach eliminates the need to move the data and maintain separate repositories. It's simpler.
    I read a bit about it, and these are indeed very interesting capabilities - especially as I know how to write decent SQL statements.
    That would mean that DB2 could grow beyond just a 60-second time span - but would this growth affect the write throughput? I guess so... This will require proper tests, definitely.
    Now, I plan for the "real" data (the "meaningful" part of the data), except timestamps, to be stored in quite a "NoSQL" way (the term is "à la mode"...), say as JSON objects (or something close to it).
    This is why I envisaged MongoDB for instance as the DB layer for the reporting part, as it is able to query directly into JSON, with a specific way to handle "indexes" too. But I'm no MongoDB expert in any way, so I'm not sure at all, again, that it is a good fit (just as much as I'm not sure right know what the proper, most efficient approach is, at this stage).
    Ashok_Ora wrote:
    - Use the Oracle external table mechanism (Overview and how-to - http://docs.oracle.com/cd/B28359_01/server.111/b28319/et_concepts.htm) to query the data from Oracle database. Again, you don't need to move the data. You won't be able to create indices on the external tables. If you do want to move data from the BDB repository into Oracle DB, you can run a "insert into <oracle_table> select * from <external_table_in_DB2>;". As you know, Oracle database is excellent database for all sorts of applications, including complex reporting applications.
    This is VERY interesting. VERY.
    And Oracle DB is, you're right, a very powerful and flexible database for every kind of process.
    I'll look into the docs carefully, many thanks for pointing that out (again!) :)
    I have not yet decided whether the final application will be free or open source, but this will eventually be a real question. Right now, I don't want to think about it; I just want to find the best technical solution(s) to achieve the best possible results.
    And BDB and Oracle DB are very serious competitors, definitely ;)
    Ashok_Ora wrote:
    Hope this was helpful. Let me know your thoughts.
    It definitely is very useful! It makes things clearer and allows me to get further into BDB (and Oracle as well, with your latest reply), and that's much appreciated. :)
    As I said, my primary goal is to ensure the highest write throughput - I cannot miss any incoming data, as there is no (easy/efficient) way to re-request what would be lost and get it again while being sure that it hadn't changed (the simple act of re-asking would introduce data flaws, actually).
    So, everything else (including reporting, stats, etc.) IS secondary, as long as what comes in is always stored for sure (almost) as soon as it comes in.
    This is why, in this context, "real" real time is not really crucial, and it can be "1 minute delayed" real time (it could even be "5 minutes delayed", actually, but let's be a bit demanding ^^).
    Ashok_Ora wrote:
    Just out of curiosity, can you tell us some additional details about your application?
    Of course, I owe you a few more details, as you are helping me a lot in my research/study :)
    The application is sort of a tracking service. It is primarily intended to serve the very specific needs of a client of mine: they have several applications that all use the same "contents". Those contents can be anything - text, HTML, images, whatever - and they need to know almost in real time which application (used by which external client/device) is requesting resources, which ones, from where, in which locale/area and language, etc.
    Really a kind of "Google Analytics" stuff (which I pointed out at the very beginning, but they need something more specific, and, above all, they need to keep all the data with them, so GA is not a solution here).
    So, as you can guess, this is pretty much... big. On the paper, at least. Not sure if this will ever be implemented one day, to be honest with you, but I really want to do the technical study seriously and bring the best options so that they know where they plan to go.
    As of me, I would definitely love it if this could become reality, this is very interesting and exciting stuff. Especially as it requires to see things as they are and not to fall into the "NoSQL fashion" for the sake of being "cool". I don't want a cool application, I want an efficient one, that fits the needs ;) What is very interesting here is that BDB is not new at all, though it's one of the most serious identified players so far!
    Ashok_Ora wrote:
    Thanks and warm regards.
    ashok
    Many thanks again, Ashok!
    I'll leave this question open, in order to keep posting as I progress (and, above all, to be able to get your thoughts, rewarding comments and advice :) )
    Cheers,
    Jimshell

  • SQL query with multiple tables - what is the most efficient way?

    Hello, I am learning PL/SQL. I have a simple procedure where I need to find the number of employees and departments per location, based on a user-supplied location_id.
    I have 3 Tables:
    LOCATIONS
    location_id (pk)
    location_name
    DEPARTMENTS
    department_id (pk)
    location_id (fk)
    department_name
    EMPLOYEES
    employee_id (pk)
    department_id (fk)
    employee_name
    1 Location can have 0-MANY Departments
    1 Employee has 1 Department
    Here is the query I came up with for PL/SQL procedure:
    /*Ecount, Dcount are NUMBER variables */
    SELECT SUM (EmployeeCount), COUNT(DepartmentNumber)
         INTO Ecount, Dcount
         FROM     
         (SELECT COUNT(employee_id) EmployeeCount, department_id DepartmentNumber
              FROM employees
              GROUP BY department_id
              HAVING department_id IN
                        (SELECT department_id
                        FROM departments
                        WHERE location_id = userInput));
    I do get the correct result, but I am just wondering if my query is on the right track and if there is a more "efficient" way of doing this.
    Thanks in advance for helping a newbie out.

    Hi,
    Welcome to the forum!
    Something like this will be more efficient:
    SELECT  COUNT (employee_id)             AS ECount
    ,       COUNT (DISTINCT department_id)  AS DCount
    FROM    employees
    WHERE   department_id IN ( SELECT  department_id
                               FROM    departments
                               WHERE   location_id = :userInput
                             );
    You should also try a join instead of the IN subquery.
    For efficiency, do only the things you need to do.
    For example, you don't need a count of employees in each department, so don't compute one. That means you won't need the in-line view, so don't have one.
    You don't need PL/SQL for this job, so don't use PL/SQL if you don't have to. (I realize this question was out of context, so you may have good reasons for doing this in PL/SQL.)
    Do all filtering as early as possible. Don't waste effort computing things that won't be used.
    A particular example of this is: Never use a HAVING clause when you can use a WHERE clause. What's the difference between a WHERE clause and a HAVING clause? The WHERE clause is applied before aggregate functions are computed, and the HAVING clause is applied after; there's no other difference. Therefore, if the HAVING clause isn't referencing an aggregate function, it could be done in a WHERE clause instead.
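    For reference, the join variant suggested above might look like this when run from JDBC (a hedged sketch: only the table and column names come from the thread; the connection URL, credentials and location_id value are placeholders):

    import java.sql.*;

    // Minimal JDBC sketch of the join variant: count employees and distinct
    // departments for one location, with no IN subquery and no inline view.
    public class LocationCounts {
        public static void main(String[] args) throws SQLException {
            String sql =
                "SELECT COUNT(e.employee_id)            AS ecount, " +
                "       COUNT(DISTINCT e.department_id) AS dcount " +
                "FROM   employees   e " +
                "JOIN   departments d ON d.department_id = e.department_id " +
                "WHERE  d.location_id = ?";

            try (Connection con = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//host:1521/service", "user", "pass"); // placeholders
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, 1700); // placeholder location_id
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        System.out.println("Employees: " + rs.getInt("ecount")
                                + ", Departments: " + rs.getInt("dcount"));
                    }
                }
            }
        }
    }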

  • Most efficient way to use thumbnails of multiple sizes

    When a user submits an image on my website, the upload script
    currently creates thumbnails in three different sizes (120px, 90px,
    and 20px). Different thumbnail sizes are used in different areas of
    the site.
    Is there a more storage-efficient way to display high-quality
    thumbnails in different sizes, without requiring a separate
    thumbnail file for each size used?
    I cannot rely on browsers to resize images as the quality is
    often very undesirable.

    AngryCloud wrote:
    > I may not have been clear in my last post...
    >
    > When an image is viewed normally on a page, it is saved to the client's
    > computer so that it will load instantly the next time the image is called for.
    >
    > I do not want visitors to have to wait for the same images they have already
    > seen to re-download and resample. A file of each resampled image should be
    > saved to the client's computer to avoid this.
    >
    > Is it possible to save resampled images on a page to the client's computer?
    It sounds like your page needs to check whether the cached version exists
    before creating a new image, otherwise it's always going to create a new
    version of the image and send it to the browser... but I don't know how this
    is possible.
    Dooza
    Posting Guidelines
    http://www.adobe.com/support/forums/guidelines.html
    How To Ask Smart Questions
    http://www.catb.org/esr/faqs/smart-questions.html
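    A server-side sketch of the "check the cache before resampling" idea described above, written in Java purely for illustration (an assumption, not the original poster's stack; the cache directory, JPEG output format and naming scheme are placeholders):

    import javax.imageio.ImageIO;
    import java.awt.Graphics2D;
    import java.awt.RenderingHints;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;

    // A thumbnail of a given width is generated once, written to a cache
    // directory, and reused on later requests instead of being resampled again.
    public class ThumbnailCache {
        public static File thumbnail(File original, int width, File cacheDir) throws IOException {
            File cached = new File(cacheDir, width + "_" + original.getName());
            if (cached.exists() && cached.lastModified() >= original.lastModified()) {
                return cached; // serve the previously resampled file
            }
            BufferedImage src = ImageIO.read(original);
            int height = Math.max(1, src.getHeight() * width / src.getWidth());
            BufferedImage thumb = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = thumb.createGraphics();
            g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                    RenderingHints.VALUE_INTERPOLATION_BILINEAR);
            g.drawImage(src, 0, 0, width, height, null);
            g.dispose();
            ImageIO.write(thumb, "jpg", cached);
            return cached;
        }
    }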

  • Major advice needed with ATT / IPHONE / MICROCELL misleading/doesn't work

    I'm sorry if this is a little off topic, but I need some advice.
    I've been battling ATT since thanksgiving week. I moved into a house (checked the coverage map before moving, and saw I was in the clear)
    I have no service here. NOTHING. I had to use a Verizon phone to call them, since Verizon actually works. They tell me they're working on it... weeks go by. A month, and another month of calling them and wondering what they could actually be doing to rectify my problem. Well, about 2 weeks ago, AT&T's RF engineer was hanging around my house for a couple of days. He told me they didn't even notify him about my problem until days before he arrived! So they were lying to me the whole time about trying to fix my lack of service. I kept asking them, "Can't you lend me a MicroCell until you fix what's wrong here?" They just said, "No, you have to buy that and pay for its service."
    So it's now Feb 25th, and it's day 2 with my MicroCell. It shows 5 bars, but most of my calls are dropped, or I'm talking to someone who can't hear me. I emailed them last night asking if I could just be let go from this contract so I can move on and get back to business. They will waive my fee if I give them my iPhone 4, which I've owned for 11 months. Doesn't this seem a little strange?
    I mentioned how I've lost many clients and a lot of money due to the fact that they think I'm ducking their calls because I don't receive them unless I drive 2 miles from where I live.
    Again, today I talked with the service rep and they will not budge. Is there something in the contract that would help me out? Even the RF engineer/tech told me there is nothing they can do (other than putting up more towers!)
    So I'm stuck in this nightmare with a phone that doesn't work, and paying for it. I still have a year on my contract and I just want out. The iphone was fun and all but I'm done with att and their ransom fees.
    Can someone help me?
    Again... sorry if this isn't specifically an Apple-related post, but I don't know of another reputable place to ask for advice.

    As you know, this is a user-to-user iPhone tech support site. Neither Apple nor AT&T participates here. The only way you can resolve your reception and service issues is through AT&T - no one here could possibly fix this. AT&T does provide free MicroCell service in some known poor-coverage areas.
    You can try some basic troubleshooting steps as described in the User Guide - Restart, Reset, Restore - but I doubt there is a problem in the phone itself.
    If you're not getting anywhere with AT&T, file a complaint with the FCC. Did the AT&T coverage map show coverage in your area before you purchased the phone? You might want to get a lawyer to help.
    Message was edited by: modular747

  • Most efficient way to loop through many similarly named fields?

    Hi,
    I have a 5-page document, with each page containing approx. 50 similarly named fields, e.g. Viol1Num, Viol2Num, Viol3Num ... Viol50Num.
    I am looking for an efficient way of programming a loop to look at each field in JavaScript so I can do some manipulations in those fields based on what the user entered.
    In FormCalc I've previously used the 'foreach' function, similar to:
    foreach (Field1, Field2, Field3.....Field50) do
         'BLAH'
    endfor
    However, that gets really lengthy, especially when dealing with subsequent pages, where I have to start adding 'topmostSubform.Page2.' in front of each field name so that I can access, from the first page, all of the fields on subsequent pages. Also, I need to do this in JavaScript, not FormCalc.
    For example, in JS I am using this loop to mark all fields as read only:
    for (var nPageCount = 0; nPageCount < xfa.host.numPages; nPageCount++) {
        var oFields = xfa.layout.pageContent(nPageCount, "field");
        var nNodesLength = oFields.length;
        for (var nNodeCount = 0; nNodeCount < nNodesLength; nNodeCount++) {
            oFields.item(nNodeCount).access = "readOnly";
        }
    }
    How could I do something similar to that so I could look through each field and perform actions on it without having to list out every single field name?
    I tried altering that to look at fields instead of field properties, but I couldn't get it to run.
    Thanks.

    If this is an LCD form then you're better off asking over at the LCD forum.

  • Most efficient way to loop through similarly named fields?

    Hi,
    I have a 5-page document, with each page containing approx. 50 similarly named fields, e.g. Viol1Num, Viol2Num, Viol3Num ... Viol50Num.
    I am looking for an efficient way of programming a loop to look at each field in JavaScript so I can do some manipulations in those fields based on what the user entered.
    In FormCalc I've previously used the 'foreach' function, similar to:
    foreach (Field1, Field2, Field3.....Field50) do
         'BLAH'
    endfor
    However, that gets really lengthy, especially when dealing with subsequent pages, where I have to start adding 'topmostSubform.Page2.' in front of each field name so that I can access, from the first page, all of the fields on subsequent pages. Also, I need to do this in JavaScript, not FormCalc.
    For example, in JS I am using this loop to mark all fields as read only:
    for (var nPageCount = 0; nPageCount < xfa.host.numPages; nPageCount++) {
        var oFields = xfa.layout.pageContent(nPageCount, "field");
        var nNodesLength = oFields.length;
        for (var nNodeCount = 0; nNodeCount < nNodesLength; nNodeCount++) {
            oFields.item(nNodeCount).access = "readOnly";
        }
    }
    How could I do something similar to that so I could look through each field and perform actions on it without having to list out every single field name?
    I tried altering that to look at fields instead of field properties, but I couldn't get it to run.
    Thanks.

    I have solved my issue. It took some battling in JavaScript using xfa.resolveNode.
    I have 5 pages, each consisting of a series of 60 fields named Viol1Num, Viol2Num, Viol3Num .... Viol60Num.
    When this JavaScript runs, if it detects a blank field, it inserts a '3' into it.
    Below is the JavaScript which runs for the second page of this document.
    var LoopCounter = 1;
    while (LoopCounter < 61) {
        if ((LoopCounter != 21) && (LoopCounter != 22)) {
            if ((xfa.resolveNode("topmostSubform.Page2.Viol" + LoopCounter + "Num").rawValue == null) ||
                (xfa.resolveNode("topmostSubform.Page2.Viol" + LoopCounter + "Num").rawValue == "")) {
                xfa.resolveNode("topmostSubform.Page2.Viol" + LoopCounter + "Num").rawValue = 3;
            }
        }
        LoopCounter = LoopCounter + 1;
    }
  • Most efficient way to load XML file data into tables

    I have a complex XML file running into MBs. I want to load its data into 7-8 tables.
    Which way will be better:
    1) Use SQL*Loader to load directly into the 7-8 tables by modifying the control file.
    Is this really possible and feasible? I am not even sure about it.
    2) Load the data as XMLType in a table and register it, then extract from there to load into the various tables.
    Please help. I have to find the most efficient way of doing it.
    Regards,
    Sudhir

    Yes, it is possible to use SQL*Loader to parse and load XML, but that is not what it was designed for, and so it is not recommended. You also don't need to register a schema just to load/store/parse XML in the DB.
    So where does that leave you?
    Some options
    {thread:id=410714} (see page 2)
    {thread:id=1090681}
    {thread:id=1070213}
    Those talk some about storage options and reading in XML from disk and parsing XML. They should also give you options to consider. Without knowing more about your requirements for the effort, it is difficult to give specific advice. Maybe your 7-8 tables don't exist and so using Object Relational Storage for the XML would be the best solution as you can query/update tables that Oracle creates based off the schema associated to the XML. Maybe an External Table definition works better for reading the XML into the system because this process will happen just once. Maybe using WebDAV makes more sense for loading XML to be parsed (I don't have much experience with this, just know it is possible from what I've read on the forums). Also, your version makes a difference as you have different options available depending upon the version of Oracle.
    Hope all that helps as a starter.
    Edited by: A_Non on Jul 8, 2010 4:31 PM
    A great example, see the answers by mdrake in {thread:id=1096784}
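    If you end up going with option 2 from the original question (stage the document in an XMLType column, then shred it into the relational tables), the plumbing could look roughly like this from JDBC. This is a hedged sketch only: the staging and target tables, column names and XPaths are invented for illustration, and very large documents may call for a different storage or streaming approach:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.sql.Clob;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    // Stage the whole document in an XMLType column, then shred it into a
    // relational table with XMLTABLE in a single INSERT..SELECT.
    public class XmlStageLoad {
        public static void main(String[] args) throws Exception {
            String xml = new String(Files.readAllBytes(Paths.get(args[0])), "UTF-8");

            try (Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//host:1521/service", "user", "pass")) { // placeholders
                con.setAutoCommit(false);

                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO xml_stage (doc) VALUES (XMLTYPE(?))")) {   // hypothetical staging table
                    Clob clob = con.createClob();
                    clob.setString(1, xml);
                    ps.setClob(1, clob);
                    ps.executeUpdate();
                }

                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO orders (order_id, customer) " +            // hypothetical target table
                        "SELECT x.order_id, x.customer " +
                        "FROM xml_stage s, " +
                        "     XMLTABLE('/orders/order' PASSING s.doc " +
                        "              COLUMNS order_id NUMBER        PATH 'id', " +
                        "                      customer VARCHAR2(100) PATH 'customer') x")) {
                    ps.executeUpdate();
                }
                con.commit();
            }
        }
    }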

  • Most efficient way to consume log files

    Hello everyone,
    I've been absent from the forums for a while, but I'm back at it now...
    I have a question about the most efficient way to consume log files. I read in PowerShell in Action, by Bruce Payette, that using a switch statement with a regex works pretty well; that being said, I haven't tried it yet. Select-String is working pretty well for me, but I have about 10 different entry types that I need to search the logs for every 5 minutes, and I'm scanning about 15 GB of logs at each interval. Anyway, if anyone has information about how to do something like that as quickly as possible, I'd appreciate it.
    1.  piping log files that meet my criteria to select-string
       - This seems to work well but I don't like searching the same files over and over again
    2. running logs through get-content and then building a filter statement
      - This is ok but it seems to use up a fair bit of memory
    3. Some other approach that I haven't thought of yet.
    Anyway, I know this is a relatively nebulous question, sorry about that. I'm hoping that someone on here knows a really good way to find strings in log files quickly.
    Hope that helps! Jason

    You can sometimes squeeze out more speed at the expense of memory usage, but filters are pretty fast. I don't see a benefit to holding the whole file in memory, in this case.
    As I mentioned earlier, though, C# code will usually blow PowerShell away in terms of execution time.  Here's a rewrite of what I just did (just for the INI Section pattern, to keep the post size down):
    $string = @'
    #Comment Line
    [Ini-Style Section Line]
    Key = Value Line
    192.168.0.1 localhost
    Some line that doesn't match anything.
    '@

    Set-Content -Path .\test.txt -Value $string

    Add-Type -TypeDefinition @'
    using System;
    using System.Text.RegularExpressions;
    using System.Collections;
    using System.IO;

    public interface ILineParser
    {
        object ParseLine(string line);
    }

    public class IniSection
    {
        public string Section;
    }

    public class IniSectionParser : ILineParser
    {
        public object ParseLine(string line)
        {
            object o = null;
            Match match = Regex.Match(line, @"^\s*\[([^\]]+)\]\s*$");
            if (match.Success)
            {
                o = new IniSection() { Section = match.Groups[1].Value };
            }
            return o;
        }
    }

    public class LogParser
    {
        public static IEnumerable ParseFile(string fileName, ILineParser[] lineParsers)
        {
            using (StreamReader sr = File.OpenText(fileName))
            {
                string line;
                while ((line = sr.ReadLine()) != null)
                {
                    foreach (ILineParser parser in lineParsers)
                    {
                        object result = parser.ParseLine(line);
                        if (result != null)
                        {
                            yield return result;
                        }
                    }
                }
            }
        }
    }
    '@

    $parsers = @(
        New-Object IniSectionParser
    )

    $results = [LogParser]::ParseFile("$pwd\test.txt", $parsers)
    $results
