More efficient way to find an element...

I am iterating through a vector to determine if it contains a specific element, identified by an id. I know there must be a better way to do it than the example below. I looked into LinkedHashMap a little bit. Could that be a less costly solution?
Any examples would be much appreciated...
String idToFind = "2314";
for (Enumeration e = reallyBigVector.elements(); e.hasMoreElements();) {
    String[] nextPerson = (String[]) e.nextElement();
    if (nextPerson[5].equals(idToFind)) // nextPerson[5] is their id
        return true;
}
return false; // didn't find it

btv wrote:
I am iterating through a vector to determine if it contains a specific element, identified by an id. [...] Could that be a less costly solution?
No no no. Don't go storing data in an array of Strings that can be encapsulated in a class. Create a custom Person class holding this data:
public class Person {
  private int id;
  // other attributes and methods
}
Then override equals(Object) and hashCode() in that class so you can look up your object more easily from a collection class.
More info: [http://java.sun.com/docs/books/tutorial/java/concepts/]
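For illustration, a minimal sketch of that idea, assuming the id alone defines identity (the HashMap usage below is an addition, not from the original reply):

import java.util.HashMap;
import java.util.Map;

public class Person {
    private int id;
    // other attributes and methods

    public Person(int id) {
        this.id = id;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        return id == ((Person) o).id; // id alone defines identity here
    }

    @Override
    public int hashCode() {
        return id;
    }
}

With that in place, keep the people in a Map keyed by id; the lookup becomes a constant-time containsKey() call instead of a linear scan of the Vector:

Map<Integer, Person> peopleById = new HashMap<Integer, Person>();
// ...populate once while loading the data...
boolean found = peopleById.containsKey(2314);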

Similar Messages

  • A more efficient way to ensure that a string value contains only numbers?

    Hi ,
    I'm using Oracle 9.2.0.6.
    I was curious to know if there was any way I could write a more efficient query to determine if a string value contains only numbers.
    Here's my current query. This SQL is from a sub query in a Join clause.
    select distinct cta.CUSTOMER_TRX_ID, to_number(cta.SALES_ORDER) SALES_ORDER
                from ra_customer_trx_lines_all cta
                where length(cta.SALES_ORDER) = 6
                and cta.SALES_ORDER is not null
                and substr(cta.SALES_ORDER,1,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,2,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,3,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,4,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,5,1) in('1','2','3','4','5','6','7','8','9','0')
                and substr(cta.SALES_ORDER,6,1) in('1','2','3','4','5','6','7','8','9','0')
    This is a string where I'm finding A-Z, a-z characters and '/' and '-' characters in all 6 positions, plus there are values that are longer than 6 characters. That's what the length(cta.SALES_ORDER) = 6 is for. Also, of course, some cells are NULL.
    So the question is: is there a more efficient way to screen out only the values in this field that are 6-character numbers, or is what I have the best I can do?
    Thanks,

    I appreciate all of your very helpful workarounds. The cost is a little better in all cases than my original WHERE clause.
    To address the discussion about design that this question has sparked, I can say a few things that should clear up, at least, my situation.
    First of all, this custom quoting, purchase order, and sales order entry system WAS written by a bunch of 'bad' coders who didn't document their work and then left. We don't even have an ER diagram.
    The whole project that I'm only a small part of is literally trying to put Humpty Dumpty together again and then move it from a bad custom solution into Oracle Applications.
    We're rebuilding, documenting, and doing ETL. This is one of your prototypical projects from hell.
    It's a huge database project, so we're taking small bites at a time. Hopefully, somewhere right before Armageddon hits, this thing will be complete.
    But until then,..., well,..., you know the drill.
    Thanks Again.

  • More efficient way to extract number from string

    Hello guys,
    I am using this Regexp to extract numbers from a string, and I suspect that there is a more efficient way to get this done:
    SELECT  regexp_replace (regexp_replace ( REGEXp_REPLACE ('  !@#$%^&*()_+= '' + 00 SDFKA 324 000 8702 234 |  " ' , '[[:punct:]]',''), '[[:space:]]',''), '[[:alpha:]]','')  FROM dual
    Is there a more efficient way to get this done ?
    Regards,
    Fateh

    Or, with less writing, using Perl syntax \D (non-digit):
    SELECT  regexp_replace('  !@#$%^&*()_+= '' + 00 SDFKA 324 000 8702 234 |  " ','\D')
      FROM  dual
    REGEXP_REPLACE(
    003240008702234
    SQL>
    SY.

  • Creating a time channel in the data portal and filling it with data - Is there a more efficient way than this?

    I currently have a requirement to create a time channel in the data portal and subsequently fill it with data. I've shown below how I am currently doing it:
    Time_Ch = ChnAlloc("Time channel", 271214, 1, , "Time", 1, 1)     'Allocate time channel
    For intLoop = 1 to 271214
      ChD(intLoop,Time_Ch(0)) = CurrDateTimeReal          'Create time value
    Next
    I understand that the function to create and allocate memory for the time channel is extremely quick. However the time to store data in the channel afterwards is going to be highly dependent on the length I have assigned to the Time_Ch. In my application the length of Time_Ch is variable but could easily be in the order of 271214 or higher. Under such circumstances the time taken to fill Time_Ch is quite considerable. I am wondering whether this is the most appropriate way of doing things or whether there is a more efficient way of creating a time channel and filling it.
    Thanks very much for any help.
    Regards
    Matthew

    Hi Matthew,
    You are correct that there is a more efficient way to do this.  I'm a little confused about your "CurrDateTimeReal" assignment-- is this a constant?  Most people want a Time channel that counts up linearly in seconds or fractions of a second over the duration of the measurement.  But that looks like you would assign the same time value to all the rows of the new Time channel.
    If you want to create a "normal" Time channel that increases at a constant rate, you can use the ChnGenTime() function:
    ReturnValue = ChnGenTime(TimeChannel, GenTimeUnit, GenTimeXBeg, GenTimeXEnd, GenTimeStep, GenTimeMode, GenTimeNo)
    If you really do want a Time channel filled with all the same values, you can use the ChnLinGen() function and simply set the GenXBegin and GenXEnd parameters to be the same value:
    ReturnValue = ChnLinGen(TimeChannel, GenXBegin, GenXEnd, XNo, [GenXUnitPreset])
     In both cases you can use the Time channel you've already created (which as you say executes quickly) and point the output of these functions to that Time channel by using the Group/Channel syntax of the Time channel you created for the first TimeChannel parameter in either of the above functions.
    Brad Turpin
    DIAdem Product Support Engineer
    National Instruments

  • MDX - More efficient way?

    Hi
    I am still learning MDX and have written this code. It needs to recalculate all employees in a cost center (COSTCENTER is a property of the EMPLOYEE dimension) when one of the assumptions (e.g. P00205) changes. These assumptions are planned at cost center level and planned against employee DUMMY. Is there a more efficient way to write this code, as there are lots of accounts that need to be posted to:
    *SELECT (%EMPLOYEE%, ID, EMPLOYEE, [COSTCENTER]  = %COSTCENTER_SET%)
    //Workmens Comp
    *XDIM_MEMBERSET P_ACCT = "IKR0000642000"
    *FOR %EMP% = %EMPLOYEE%   
             [EMPLOYEE].[#%EMP%] = ( [P_ACCT].[P00205],[EMPLOYEE].[DUMMY] ) * ( [P_ACCT].[P00400],[EMPLOYEE].[%EMP%] )
    *NEXT
    *COMMIT
    //Fringe Benefits Employer
    *XDIM_MEMBERSET P_ACCT = "IKR0000628100" 
    *FOR %EMP% = %EMPLOYEE%
             [EMPLOYEE].[#%EMP%] = ( [P_ACCT].[P00210],[EMPLOYEE].[DUMMY] ) * ( [P_ACCT].[P00400],[EMPLOYEE].[%EMP%] )
    *NEXT
    *COMMIT
    //Fringe Benefits Other
    *XDIM_MEMBERSET P_ACCT = "IKR0000626100" 
    *FOR %EMP% = %EMPLOYEE%
             [EMPLOYEE].[#%EMP%] = ( [P_ACCT].[P00209],[EMPLOYEE].[DUMMY] ) * ( [P_ACCT].[P00400],[EMPLOYEE].[%EMP%] )
    *NEXT
    *COMMIT

    Maybe the following?
    *SELECT (%EMPLOYEE%, ID, EMPLOYEE, [COSTCENTER]  = %COSTCENTER_SET%)
    *XDIM_MEMBERSET EMPLOYEE = %EMPLOYEE%
    *XDIM_MEMBERSET P_ACCT = IKR0000642000,IKR0000628100,IKR0000626100
    //Workmens Comp
    [P_ACCT].[#IKR0000642000] = ( [P_ACCT].[P00205],[EMPLOYEE].[DUMMY] ) * ( [P_ACCT].[P00400] )
    //Fringe Benefits Employer
    [P_ACCT].[#IKR0000628100] = ( [P_ACCT].[P00210],[EMPLOYEE].[DUMMY] ) * ( [P_ACCT].[P00400] )
    //Fringe Benefits Other
    [P_ACCT].[#IKR0000626100] = ( [P_ACCT].[P00209],[EMPLOYEE].[DUMMY] ) * ( [P_ACCT].[P00400] )
    *COMMIT
    You should probably also restrict explicitly on all other dimensions in your applications so that none are accidentally left open that don't need to be.
    Ethan

  • Linking from one PDF to another: Is there a more efficient way?

    Some background first:
    We make a large catalog (400 pages) in Indesign and it's updated every year. We are a wholesale distributor and our pricing changes, so we also make a price list with price ref #s that correspond with #s printed in the main catalogue. Last year we also made this catalog interactive so that a pdf of it could be browsed using links and bookmarks. This is not too difficult using Indesign and making any adjustments in the exported PDF. Here is the part that becomes tedious, and is especially so this year:
    We also set up links in the main catalog that go to the price list pdf, opening the page with the item's price ref # and prices. Here's my biggest issue: I have not found any way to do this except making links one at a time in Acrobat Pro (and setting various specifications like focus and action and which page in the price list to open). Last year this wasn't too bad because we used only one price list. It still took some time to go through and set up 400-500 links individually.
    This year we've simplified our linking a little by putting only one link per page, but that is still 400 links. And this year I have 6 different price lists (price tiers) to link to the main catalogue pdf. That's in the neighborhood of 1200-1500 repetitions of: double-click the link (button) to open Button Properties, click the Actions tab, click Add "Go to page view", set the link to the other pdf page, click Edit, change Open in to "New Window", and set the Zoom. This isn't a big deal if you only have a few Next, Previous, Home kind of buttons, but it's huge when you have hundreds of links. Surely there's a better way?
    Is there any way in Acrobat or Indesign to more efficiently create and edit hundreds of links from one pdf to another?
    If anything is unclear and my question doesn't make sense please ask. I will do my best to help you answer my questions.
    Thanks

    George, I looked at the article talking about the fdf files and it sounds interesting. I've gathered that I could manipulate the pdf links by making an fdf file and importing that into the PDF, correct?
    Now, I wondered: can I export an fdf from the current pdf, change what is in there, and import it back into the pdf? I've tried this (Forms > More Form Options > Manage Form Data > Export Data) and then opened the fdf in a text editor, but I see nothing related to the document's links... I assume this is because the export only covers 'form' data to begin with. Is there a way to export something with link data like that described in the article link you provided?
    Thanks

  • Which is the more efficient way to get a result set from the database server

    Hi,
    I am working on a project where I need to query a database to fetch a result set and then iterate through it. What I want is to create one single piece of Java code that can call many different SQLs and build a list out of the result set. There are two approaches open to me:
    1.) Create a txt file where I can store my queries. My Java program can read this file and get the appropriate query to be used.
    2.) Create a stored procedure containing the queries and call the stored procedure from my Java program. Note that some of the queries need to be created dynamically depending upon the parameters supplied.
    Of these two approaches, which is optimal, and why?
    Also, the following things should be noted:
    1. At times I want to create the WHERE clause of the query dynamically depending upon the parameters passed.
    2. I want one single Java file that will handle all database calls.
    3. Parameters to the stored procedure can be passed using an array descriptor.
    4. Connections are made using JNDI.
    Please recommend the optimal of these two. You may also suggest some other approaches, if any.
    Thanks,
    Rajan
    Edited by: RP on Jun 26, 2012 11:11 PM

    RP wrote:
    In case of queries stored in text files, I will need to replace some predefined placeholders with actual parameters and then pass that modified query to the db engine. I also liked the second approach as it is more easily maintainable.
    There are a couple of issues. Shared SQL is one. Irrespective of the method used, the SQL cursor that is created needs to have bind variables. This ensures re-usability of the cursor, reduces the risk of Shared Pool fragmentation, lowers hard parsing and reduces CPU utilisation.
    Another issue is flexibility. If the SQL cursors are created by stored procedures, this code resides on the server and abstracts the Java client from the complexities of SQL and SQL performance. The code can easily be updated and fine tuned to deliver faster/better SQL cursors, or modified to take new Oracle features, changes in the data model, and so on, into consideration. This stored proc can be updated without having to touch or recompile a single byte of Java client code.
    There's also the security issue. What is more secure? SQL encapsulated in stored procs in a secure database and server environment? Or SQL "encapsulated" in text files on the client?
    The less code you have running on the client, the less code you have running in the wild that can be compromised without having to first compromise the server.
    RP wrote:
    I was only worried about any performance issues that might happen using this approach.
    Performance is not a factor of who creates the SQL cursor. Whether a Java client, a PL/SQL stored proc, or a .Net client creates the SQL cursor, that cursor does not know or care what the client is. The SQL cursor performs as well as it is capable of, given the execution plan, data volumes, server resources and speed/performance of the server.
    The client language and the SQL cursor interface used by the client (there are several in PL/SQL) determine the performance of the client's interaction with the cursor (e.g. round trips to the database when interfacing with the cursor). The client language (and its client interface to the cursor) does not dictate the actual performance of that SQL cursor on the database (it does not make joins faster, or I/O faster).
    RP wrote:
    One more question: will my Java program close the cursor that I opened in the procedure?
    That you need to ask your Java code. Java code leaking ref cursors is unfortunately all too common. You need to make sure that your Java client interface to SQL cursors closes the cursor handle when done.
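    A minimal JDBC sketch of both points, assuming a stored procedure get_emp(p_dept IN NUMBER, p_cur OUT SYS_REFCURSOR); the procedure and column are illustrative, not from the thread. The bind variable keeps the cursor shareable, and the ref cursor handle is closed explicitly.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import oracle.jdbc.OracleTypes; // Oracle JDBC driver

    public class RefCursorClient {
        // Assumes: PROCEDURE get_emp(p_dept IN NUMBER, p_cur OUT SYS_REFCURSOR)
        static void printEmployees(Connection con, int deptId) throws SQLException {
            CallableStatement cs = con.prepareCall("{ call get_emp(?, ?) }");
            try {
                cs.setInt(1, deptId); // bind variable: one shareable cursor, no hard parse per value
                cs.registerOutParameter(2, OracleTypes.CURSOR);
                cs.execute();
                ResultSet rs = (ResultSet) cs.getObject(2);
                try {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                } finally {
                    rs.close(); // close the ref cursor handle: don't leak it
                }
            } finally {
                cs.close();
            }
        }
    }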

  • More efficient ways to deal with query than using IN clause???

    There is a module in an application that I wrote that under certain circumstances executes a query that looks something like the following:
    SELECT field1, field2
    from table
    where id in (1,2............1000)
    OR id in (1001,1002.............2000)
    OR id in (59000, 59001.................60000);
    To clarify... Yes, I am saying that because of Oracle's limit of 1000 expressions in an IN clause, there would be up to 60 different IN clauses.
    When I initially wrote this module... I wasn't aware of the volume of data that I was dealing with. I fully expect that when running this module with real data, it is going to be painfully slow. I am also worried about out of memory errors. Does anyone have suggestions? Thanks!
    Btw... This is using ORACLE 10g

    Depending on the number of rows in the table, I would tend to look at loading the ID's into a temporary table and then using that temporary table in your search, i.e.
    SELECT field1, field2
      FROM table
    WHERE id IN (SELECT id FROM new_temp_table)
    or
    SELECT field1, field2
      FROM table a
    WHERE EXISTS (
        SELECT 1
          FROM new_temp_table b
         WHERE a.id = b.id )
    Of course, if the logic for figuring out the 60,000 ID's you want to search on can be expressed as a SQL query, you could replace the temp table in the above examples with that particular SQL expression.
    Justin
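    If the module is Java-based, a sketch of the temp-table approach could look like this (new_temp_table matches the example above; the driving table name the_table, the column names, and the batch insert are assumptions, and the global temporary table is assumed to be created once up front):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class TempTableSearch {
        // Assumes: CREATE GLOBAL TEMPORARY TABLE new_temp_table (id NUMBER)
        //          ON COMMIT DELETE ROWS;   -- created once, not per run
        static void search(Connection con, int[] ids) throws SQLException {
            PreparedStatement ins = con.prepareStatement(
                    "INSERT INTO new_temp_table (id) VALUES (?)");
            for (int i = 0; i < ids.length; i++) {
                ins.setInt(1, ids[i]);
                ins.addBatch();       // one round trip per batch, not per id
            }
            ins.executeBatch();
            ins.close();
            PreparedStatement qry = con.prepareStatement(
                    "SELECT field1, field2 FROM the_table t "
                    + "WHERE EXISTS (SELECT 1 FROM new_temp_table x WHERE x.id = t.id)");
            ResultSet rs = qry.executeQuery();
            while (rs.next()) {
                // process each row instead of holding 60,000 ids in the SQL text
            }
            rs.close();
            qry.close();
        }
    }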

  • A much more efficient way to add C Series modules manually

    Scenario
    Suppose that we want to start coding our Real-Time application, but the hardware hasn't arrived yet.
    We can't Discover the chassis + modules, so we need to add modules manually.
    Current editor
    To add N modules, we need to launch this dialog N times:
    Right-click on Chassis in the Project Explorer
    Hover over "New"
    Click "C Series Modules…"
    Click "New target or device"
    Click "C Series Module"
    Click "OK"
    Wait for LabVIEW to fetch module list (wait ~1 second)
    Select Type (2 clicks)
    Select Location (2 clicks)
    Click "OK"
    Go to #1 to add a new module
    How tedious!
    Proposed Editor
    Wouldn't it be nice if we could set up all the modules in 1 dialog?
    Features
    Table auto-fills itself with modules already in the project
    Number of rows is determined by the chassis model. No need to select Location
    Ability to leave rows/slots empty
    Editable Name field (with default name) appears upon selecting Type
    Description appears upon selecting Type
    Feel the difference
    Adding N modules (using default names) requires...
    Current dialog: 10N clicks, N hovers, waiting N seconds
    Proposed dialog*: (6+2N) clicks, 1 hover, waiting 1 second
    So, adding 8 modules requires...
    Current dialog: 80 clicks, 8 hovers, waiting 8 seconds
    Proposed dialog*: 22 clicks, 1 hover, waiting 1 second
    *Assuming that steps 1-7 and 10 need to be performed once
     

    Oh, and if this is implemented, please make sure that users can comfortably fit all text on their screen!

  • Can you suggest a more efficient way

    Hi,
    I have strings like
    insert into drmandi_trans (drmt_mandi_code,drmt_commodity_code,drmt_qty_arrived,drmt_morning_price_max,drmt_morning_price_min,drmt_evening_price_max,drmt_evening_price_min,DRMT_Tran_date) values ('MD_INMPBPL003','MD_COM_001', 0,0,0,0,0,#8/3/2001#)
    and
    update drmandi_trans set drmt_qty_arrived = 012 ,drmt_morning_price_max = 012 , drmt_morning_price_min = 012 ,drmt_evening_price_max = 012 ,drmt_evening_price_min= 012 where drmt_mandi_code = 'MD_INMPBPL003' and drmt_commodity_code = 'MD_COM_002' and drmt_tran_date =#8/3/2001#
    I want them to be
    insert into drmandi_trans (drmt_mandi_code,drmt_commodity_code,drmt_qty_arrived,drmt_morning_price_max,drmt_morning_price_min,drmt_evening_price_max,drmt_evening_price_min,DRMT_Tran_date) values ('MD_INMPBPL003','MD_COM_001', 0,0,0,0,0,'8/3/2001')
    update drmandi_trans set drmt_qty_arrived = 012 ,drmt_morning_price_max = 012 , drmt_morning_price_min = 012 ,drmt_evening_price_max = 012 ,drmt_evening_price_min= 012 where drmt_mandi_code = 'MD_INMPBPL003' and drmt_commodity_code = 'MD_COM_002' and drmt_tran_date ='8/3/2001'
    I wrote the following method for it and pass these strings to it as a parameter:
    public static String changeForSqlServer(String command) {
        // Replaces Access-style #...# date delimiters with single quotes.
        StringTokenizer st = new StringTokenizer(command, ",=(#)", true);
        StringBuffer sqlServerVersion = new StringBuffer();
        String previousToken = "";
        String nextToken = "";
        String token = "";
        while (st.hasMoreTokens()) {
            token = st.nextToken().trim();
            if (token.equals("#")) {
                if (previousToken.equals(",") || previousToken.equals("(") || previousToken.equals("=")) {
                    // '#' opens a date literal: emit the opening quote
                    sqlServerVersion.append("'");
                    previousToken = token;
                } else {
                    if (st.hasMoreTokens()) {
                        nextToken = st.nextToken().trim();
                        if (nextToken.equals(",") || nextToken.equals(")")) {
                            // '#' closes a date literal: emit the closing quote
                            sqlServerVersion.append("'").append(nextToken);
                        } else {
                            sqlServerVersion.append(token).append(nextToken);
                        }
                        previousToken = nextToken;
                    } else {
                        sqlServerVersion.append("'");
                        return sqlServerVersion.toString();
                    }
                }
            } else {
                if (!token.equals("")) {
                    previousToken = token;
                    sqlServerVersion.append(token);
                }
            }
        }
        return sqlServerVersion.toString();
    }
    Although this method does the above task, I feel that there may be many improvements possible to make it faster.
    It will be helpful if you can suggest improvements in this code
    or suggest some better algorithms
    thanks

    Could you clarify what exactly you're trying to achieve here? From the look of it, you're doing the job of the JDBC driver. You should consider reading the JDBC documentation or buying a book. Check out PreparedStatement, which allows you to define statements with variable input fields.
    Check out :
    http://java.sun.com/j2se/1.3/docs/guide/jdbc/index.html
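    For example, a PreparedStatement version of the insert above might look like the following sketch (the connection setup and the reduced error handling are assumptions); the driver renders the date for whichever database you target, so no '#' rewriting is needed:

    import java.sql.Connection;
    import java.sql.Date;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class MandiInsert {
        static void insertTrans(Connection con, Date tranDate) throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                "insert into drmandi_trans (drmt_mandi_code, drmt_commodity_code, "
                + "drmt_qty_arrived, drmt_morning_price_max, drmt_morning_price_min, "
                + "drmt_evening_price_max, drmt_evening_price_min, drmt_tran_date) "
                + "values (?, ?, ?, ?, ?, ?, ?, ?)");
            ps.setString(1, "MD_INMPBPL003");
            ps.setString(2, "MD_COM_001");
            ps.setInt(3, 0);
            ps.setInt(4, 0);
            ps.setInt(5, 0);
            ps.setInt(6, 0);
            ps.setInt(7, 0);
            ps.setDate(8, tranDate); // no #...# vs '...' rewriting needed
            ps.executeUpdate();
            ps.close();
        }
    }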

  • More cost efficient way??

    Any suggestions as to a more efficient way to write the following statement?
    SELECT 'P22-RANGE-1 ', PSU, CD.PARID, CD.LASTUPDATEDATE,
    PR.VEHICLEID P01,
    PR.OCCNUMBER P02,
    PR.PERSONTYPEID P03,
    NM.STRIKEVEHICLEID P22
    FROM NASS.PARDATA PAR,
    GES.CRASHDATA CD,
    GES.PERSON PR,
    GES.NONMOTORIST NM
    WHERE PAR.PARID=CD.PARID AND
    CD.PARID=PR.PARID AND
    PR.PARID=NM.PARID (+) AND
    PR.VEHICLEID=NM.VEHICLEID (+) AND
    PR.OCCUPANTID=NM.OCCUPANTID (+) AND
    ((PR.PERSONTYPEID IN (26706,26707,26708,26709,26710) AND
    (NM.STRIKEVEHICLEID<1 OR
    NM.STRIKEVEHICLEID IS NULL)) OR
    (PR.PERSONTYPEID IN (26704,26705,26711) AND
    NM.STRIKEVEHICLEID>0))
    ORDER BY 1,2,3,4,5,6;

    I would try this
    SELECT 'P22-RANGE-1 ', PSU, CD.PARID, CD.LASTUPDATEDATE,
    PR.VEHICLEID P01,
    PR.OCCNUMBER P02,
    PR.PERSONTYPEID P03,
    NM.STRIKEVEHICLEID P22
    FROM NASS.PARDATA PAR,
    GES.CRASHDATA CD,
    GES.PERSON PR,
    GES.NONMOTORIST NM
    WHERE PAR.PARID=CD.PARID AND
    CD.PARID=PR.PARID AND
    PR.PARID=NM.PARID (+) AND
    PR.VEHICLEID=NM.VEHICLEID (+) AND
    PR.OCCUPANTID=NM.OCCUPANTID (+) AND
    PR.PERSONTYPEID BETWEEN 26706 AND 26710 AND
    NVL(NM.STRIKEVEHICLEID,0)<1
    UNION ALL
    SELECT 'P22-RANGE-1 ', PSU, CD.PARID, CD.LASTUPDATEDATE,
    PR.VEHICLEID P01,
    PR.OCCNUMBER P02,
    PR.PERSONTYPEID P03,
    NM.STRIKEVEHICLEID P22
    FROM NASS.PARDATA PAR,
    GES.CRASHDATA CD,
    GES.PERSON PR,
    GES.NONMOTORIST NM
    WHERE PAR.PARID=CD.PARID AND
    CD.PARID=PR.PARID AND
    PR.PARID=NM.PARID (+) AND
    PR.VEHICLEID=NM.VEHICLEID (+) AND
    PR.OCCUPANTID=NM.OCCUPANTID (+) AND
    PR.PERSONTYPEID BETWEEN 26704 AND 26705 AND
    NM.STRIKEVEHICLEID>0
    UNION ALL
    SELECT 'P22-RANGE-1 ', PSU, CD.PARID, CD.LASTUPDATEDATE,
    PR.VEHICLEID P01,
    PR.OCCNUMBER P02,
    PR.PERSONTYPEID P03,
    NM.STRIKEVEHICLEID P22
    FROM NASS.PARDATA PAR,
    GES.CRASHDATA CD,
    GES.PERSON PR,
    GES.NONMOTORIST NM
    WHERE PAR.PARID=CD.PARID AND
    CD.PARID=PR.PARID AND
    PR.PARID=NM.PARID (+) AND
    PR.VEHICLEID=NM.VEHICLEID (+) AND
    PR.OCCUPANTID=NM.OCCUPANTID (+) AND
    PR.PERSONTYPEID = 26711 AND
    NM.STRIKEVEHICLEID>0
    ORDER BY 1,2,3,4,5,6;

  • SQL query with multiple tables - what is the most efficient way?

    Hello, I am learning PL/SQL. I have a simple procedure where I need to find the number of employees and departments per location, given a user input of location_id.
    I have 3 Tables:
    LOCATIONS
    location_id (pk)
    location_name
    DEPARTMENTS
    department_id (pk)
    location_id (fk)
    department_name
    EMPLOYEES
    employee_id (pk)
    department_id (fk)
    employee_name
    1 Location can have 0-MANY Departments
    1 Employee has 1 Department
    Here is the query I came up with for PL/SQL procedure:
    /*Ecount, Dcount are NUMBER variables */
    SELECT SUM (EmployeeCount), COUNT(DepartmentNumber)
         INTO Ecount, Dcount
         FROM     
         (SELECT COUNT(employee_id) EmployeeCount, department_id DepartmentNumber
              FROM employees
              GROUP BY department_id
              HAVING department_id IN
                        (SELECT department_id
                        FROM departments
                        WHERE location_id = userInput));
    I do get the correct result, but I am just wondering if my query is on the right track and if there is a more "efficient" way of doing this.
    Thanks in advance for helping a newbie out.

    Hi,
    Welcome to the forum!
    Something like this will be more efficient:
    SELECT    COUNT (employee_id)               AS ECount
    ,       COUNT (DISTINCT department_id)     AS DCount
    FROM       employees
    WHERE       department_id IN (     SELECT     department_id
                        FROM      departments
                        WHERE      location_id = :userInput );
    You should also try a join instead of the IN subquery.
    For efficiency, do only the things you need to do.
    For example, you don't need a count of employees in each department, so don't compute one. That means you won't need the in-line view, so don't have one.
    You don't need PL/SQL for this job, so don't use PL/SQL if you don't have to. (I realize this question was out of context, so you may have good reasons for doing this in PL/SQL.)
    Do all filtering as early as possible. Don't waste effort computing things that won't be used.
    A particular example of this is: Never use a HAVING clause when you can use a WHERE clause. What's the difference between a WHERE clause and a HAVING clause? The WHERE clause is applied before aggregate functions are computed, and the HAVING clause is applied after; there's no other difference. Therefore, if the HAVING clause isn't referencing an aggregate function, it could be done in a WHERE clause instead.

  • Best way to find and copy files

    If I have a list of 2000 file names and want to find:
    if they exist in folder x
    then copy to folder y
    else
    output an error message to a txt file.
    What would the most efficient way to do this be? So far I've used listFiles() and put a >HUGE< directory listing into an array. Then I did a comparison for each file name in the list against the array. I know this can't be the most efficient way. I posted earlier on the board but for some reason no one answered. Can anyone suggest an easier way to do this? And what is the best way to copy files in Java: opening a BufferedWriter etc.? Essentially, I'm doing a restore from multiple backup directories to the original directory. Please help me if you can take the time.
    Thank you,
    jon

    If I have a list of 2000 file names and want to find:
    if they exist in folder x
    then copy to folder y
    else
    output an error message to a txt file.
    What would the most efficient way to do this be?
    In a loop, just read and write each file. If it does not exist, print an error.
    So far I've used listFiles() and put a >HUGE< directory listing into an array.
    A directory with a million entries is a HUGE directory; I guess you only have a few thousand. I wouldn't bother reading the file names into memory, you only need 2000 of them.
    Then I did a comparison for each file name in the list against the array.
    The OS will do this for you when you try to open the file, which you will have to do anyway to copy it.
    I know this can't be the most efficient way.
    More efficient way: don't do it at all.
    I posted earlier on the board but for some reason no one answered.
    Perhaps the problem seemed too obvious.
    Can anyone suggest an easier way to do this? And what is the best way to copy files in Java?
    Copy one at a time.
    Opening a BufferedWriter etc.?
    Read a block of, say, 64K at a time. No extra buffers required.
    Essentially, I'm doing a restore from multiple backup directories to the original directory. Please help me if you can take the time.
    If you are recovering from a backup, the most important thing is ensuring the data is correct and valid; speed is less important. (It is no good if it is fast but corrupt.)
    Copying 2000 files is going to be only as fast as your drive(s) can handle. How you copy the files is less important.
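    A sketch of the loop described above, assuming the 2000 names are already in a String array (the method signature and the error-log handling are illustrative): each file is copied in 64K blocks and missing files are logged.

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.io.PrintWriter;

    public class RestoreFiles {
        static void restore(String[] names, File srcDir, File dstDir, PrintWriter errLog)
                throws IOException {
            byte[] block = new byte[64 * 1024]; // 64K blocks, as suggested above
            for (int i = 0; i < names.length; i++) {
                File src = new File(srcDir, names[i]);
                if (!src.exists()) {            // let the OS do the lookup
                    errLog.println("Missing: " + names[i]);
                    continue;
                }
                InputStream in = new FileInputStream(src);
                OutputStream out = new FileOutputStream(new File(dstDir, names[i]));
                int n;
                while ((n = > 0) {
                    out.write(block, 0, n);     // plain read/write loop, no extra buffering
                }
                out.close();
                in.close();
            }
        }
    }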

  • More efficient way of using DECODE and CASE

    Is it possible to use both CASE and DECODE together? I feel I have too many DECODEs and I am not sure how to write this in a more efficient way.
    I would really appreciate any help.
    Thank you so much in advance!!!
    SELECT  Person_ID, Work_ID,
        CASE WHEN NBR = 1 THEN
                MIN(DECODE(NBR, 1, field1)) aug_one_field1,
                MIN(DECODE(NBR, 1, field2)) aug_one_field2,
                MIN(DECODE(NBR, 1, field3)) aug_one_field3,
                MIN(DECODE(NBR, 1, field4)) aug_one_field4,
                MIN(DECODE(NBR, 1, field5)) aug_one_field5,
                MIN(DECODE(NBR, 1, field6)) aug_one_field6,
        CASE WHEN NBR = 2 THEN
                MIN(DECODE(NBR, 2, field1)) aug_two_field1,
                MIN(DECODE(NBR, 2, field2)) aug_two_field2,
                MIN(DECODE(NBR, 2, field3)) aug_two_field3,
                MIN(DECODE(NBR, 2, field4)) aug_two_field4,
                MIN(DECODE(NBR, 2, field5)) aug_two_field5,
                MIN(DECODE(NBR, 2, field6)) aug_two_field6,
        CASE WHEN NBR = 3 THEN
                MIN(DECODE(NBR, 3, field1)) aug_three_field1,
                MIN(DECODE(NBR, 3, field2)) aug_three_field2,
                MIN(DECODE(NBR, 3, field3)) aug_three_field3,
                MIN(DECODE(NBR, 3, field4)) aug_three_field4,
                MIN(DECODE(NBR, 3, field5)) aug_three_field5,
                MIN(DECODE(NBR, 3, field6)) aug_three_field6,
        CASE WHEN NBR = 4 THEN
                MIN(DECODE(NBR, 4, field1)) aug_four_field1,
                MIN(DECODE(NBR, 4, field2)) aug_four_field2,
                MIN(DECODE(NBR, 4, field3)) aug_four_field3,
                MIN(DECODE(NBR, 4, field4)) aug_four_field4,
                MIN(DECODE(NBR, 4, field5)) aug_four_field5,
                MIN(DECODE(NBR, 4, field6)) aug_four_field6
    END
       FROM (SELECT Person_ID, Work_ID, NBR,
                    field1, field2, field3, field4, field5, field6
               FROM field_rep)
       GROUP BY Person_ID, Work_ID
    Thanks a lot, John and Frank.
    John, to answer your question: I just felt the 24 or more DECODEs would slow down the system's performance, hence I was trying to find a better way to do it.
    Frank,
    This is sample data, but I want it pivoted, hence the reason for using the DECODEs. I have Oracle 10g. Here is the sample data:
    Person_id work_id NBR      field1     field2     field3     field4 field5 field6
    1       ao334   1     1/2/2009   1/9/2010     block_A        HH       55667      1
    1       ao334   2     5/2/2011   9/9/2013     block_Z        HL       11111      3
    1       ao334   2     1/2/2009   1/9/2010     block_A        HH       22222      1
    1       ao334   4     1/2/2009   1/9/2010     block_A        HH       zzzzz      7
    1       z5521   1     10/5/2006  12/31/2012     block_C        SS       33322      1
    1       z5521   2     1/2/2009   1/9/2010     block_C        SS       21550      1
    1       z5521   3     1/2/2009   1/9/2010     block_R        SS       10000      1
    1       z5521   4     1/2/2009   1/9/2010     block_D        SS       99100      5
    1       z5521   5     1/2/2009   1/9/2010     block_P        SS       88860      1
    1       z5521   6     1/2/2009   1/9/2010     block_G        SS       99660      8
    1       ob114   1     1/2/2009   1/9/2010     block_A        HH       52333      1
    1       ob114   2     1/2/2009   1/9/2010     block_A        HH       88888      1
    Output will look like this:
    Person_id work_id  aug_one_field1 aug_one_field2 aug_one_field3 aug_one_field4 aug_one_field5 aug_one_field6 aug_two_field1 aug_two_field2 aug_two_field3 aug_two_field4 aug_two_field5  aug_two_field6
    1       ao334        1/2/2009       1/9/2010       block_A       HH             55667          1              5/2/2011       9/9/2013       block_Z        HL                      11111          3

  • EFFICIENT way of escalating an open task

    I need to escalate TASKS that are still open after 31 days.
    I figure I need 2 workflows to do this.
    As I see it right now:
    1st WF waits for 31 days after the task has been created. On the 31st day it changes a read-only field called "escalate" to YES.
    2nd WF checks for changes in tasks where: if (Status=OPEN AND escalate<>pre(escalate)) is true, then send an escalation email or task.
    Is there a more efficient way of doing this?
    TIA
    Paul

    Is there a reason you want two workflows? Why not put an e-mail action after the Wait on the same workflow? If you check the "Reevaluate Rule Conditions After Wait" checkbox on the Wait action, the workflow rule will be re-evaluated after your 31 days... so it would only send the e-mail message if the Task is still open (assuming your workflow condition is set to look at Status = Open).
    Chris
