Retrieving Records from a Hashtable

I have a problem with Hashtable.
I wrote a query that returns all results in descending order, and I put those values into a Hashtable. While enumerating (retrieving) them, the records do not come back in the order I inserted them.
Could anybody please tell me what the problem is?
This is the code I used:
import java.util.* ;
import java.sql.* ;
public class HashTest {
     public static void main(String args[]) throws Exception {
          String str;
          Hashtable hashtable = new Hashtable();
          Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
          Connection conn = DriverManager.getConnection("jdbc:odbc:datasourcename", "userid", "password");
          Statement stmt = conn.createStatement();
          String query = "select to_number(rtrim(ltrim(id))), name from users order by 1 desc";
          ResultSet rset = stmt.executeQuery(query);
          while (rset.next()) {
               hashtable.put(rset.getString(1), rset.getString("name"));
          }
          // "enum" is a reserved word in Java 5+, so use a different variable name
          Enumeration keys = hashtable.keys();
          while (keys.hasMoreElements()) {
               str = (String) keys.nextElement();
               System.out.println(str + " : " + hashtable.get(str));
          }
          rset.close();
          stmt.close();
          conn.close();
     }
}

There is a very good reason for this: there is no ordering in a Hashtable, so there is no way for the enumeration to know the order, as it is not maintained. In any case, there is no guarantee that an enumeration would respect the ordering even if there were one.
Sorry I couldn't be of more help.
PS. I noticed you posted this three times. I assume it was by mistake, as they were posted within a few minutes of each other, but please be more careful in the future.
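A follow-up sketch: if insertion order must survive iteration, `java.util.LinkedHashMap` keeps it (and `TreeMap` keeps keys sorted); `Hashtable` and `HashMap` guarantee nothing about order. A minimal example:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class OrderedMapDemo {
    public static void main(String[] args) {
        // LinkedHashMap iterates in insertion order, unlike Hashtable.
        Map<String, String> ordered = new LinkedHashMap<>();
        ordered.put("3", "carol");
        ordered.put("2", "bob");
        ordered.put("1", "alice");
        for (Map.Entry<String, String> e : ordered.entrySet()) {
            System.out.println(e.getKey() + " : " + e.getValue());
        }
    }
}
```

Entries print as 3, 2, 1: exactly the order they were inserted, so a query sorted with ORDER BY would keep its ordering through iteration.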

Similar Messages

  • How to count the number of records retrieved through a query?

    Dear All,
    I want to find the total number of records retrieved for a particular query.
    Does BW provide any internal count function that can meet my requirement?
    If yes, please provide some details.
    Thanks.
    Regards,
    Pandurang.

    Hi Pandurang,
    When you view the contents of a particular cube
    (RSA1 -> InfoCube -> Manage -> Contents)
    there is an option "Output Number of Hits".
    An extra key figure is created displaying the number of records rolled up.
    Aggregating that column might just solve your issue.
    Try
    Regards
    Akshay

  • Only the last retrieved record is put into the form item

    Hi All,
    I am trying to show all the retrieved records of a query in a multi-line TEXT item on a form. However, only the last record shows rather than all records. The code is below, where DM_AR is a package and OPEN_R_TYPE and Fetch_data are two procedures that use a cursor variable. The code runs perfectly in SQL*Plus if we replace the :RESULT_TEXT := ... with the DBMS_OUTPUT.PUT_LINE built-in procedure.
    Any suggestions, please?
    F
    DECLARE
      -- declare a cursor variable
      RES DM_AR.ruleitem_type;
      TEMP_row TEMP_TBL%ROWTYPE;
    BEGIN
      -- open the cursor using a variable
      DM_AR.OPEN_R_TYPE(RES, :MODEL_NAME);
      -- fetch the data and display it
      LOOP
        DM_AR.Fetch_data(RES, TEMP_ROW);
        EXIT WHEN RES%NOTFOUND;
        :RESULT_TEXT := 'RuleNo: ' || TEMP_ROW.RULE_ID || ' Conf: ' || TEMP_ROW.R_CONF || ' Supp: ' || TEMP_ROW.R_SUPP || ' IF: ' || TEMP_ROW.ANTECEDENT || ' THEN: ' || TEMP_ROW.CONSEQUENT;
      END LOOP;
    END;

    You are overwriting your item :RESULT_TEXT every time through the loop, so you only see the final result. You probably want this:
    :RESULT_TEXT := :RESULT_TEXT || ' RuleNo: ' || ...
    ...

  • How to get count(*) in ABAP Query...

    Hi All,
    Can someone tell me how to do the following in an ABAP Query? I want to get the count of records retrieved during the query execution and display it in the output.
    Example: SELECT COUNT( * ) FROM VBRK.
    Thanks a lot.
    Thanks!
    Puneet.

    From help doc:
    Note
    The SELECT COUNT( * ) FROM ... statement returns a result table containing a single line with the result 0 if there are no records in the database table that meet the selection criteria. In an exception to the above rule, SY-SUBRC is set to 4 in this case, and SY-DBCNT to zero.
    You can just run SELECT COUNT (*) FROM TABLE
    Number of rows is returned in SY-DBCNT
    Edited by: Kevin Lin on Jul 2, 2008 10:55 PM

  • How do I convert a Vector to be passed into the session

    Hi, I have a Vector in a JSP that I need to pass on to a second JSP. I use the following command, just as I do for normal strings:
    session.setAttribute("MyVector",vecData);
    But the following error is thrown by Tomcat:
    Incompatible type for method. Can't convert void to java.lang.Object.
    out.print(session.setAttribute("ResultsSet",set));
    ^
    I guess I have to convert it explicitly into an Object, but how do I do that? Could someone please help me quickly with this?
    If this doesn't work, is there some other way to send this Vector to my other page? I am currently sending several strings using the above method, but for Vectors it doesn't seem to work, though I am told that a session attribute can carry any object, unlike request parameters, which carry only strings. Please help!

    Hi Calin, thank you so much for taking the time to help me with my problem.
    Let me explain my requirement fully in detail. JSP1 has a form on it for the user to fill in. It is laid out as a table with multiple rows, as when entering product details one after the other in an invoice.
    I want each of these rows taken in and passed into JSP2.
    What I do is pack each row of the table into a Hashtable using a 'for' loop.
    Then I add each record (within the 'for' loop) to a Vector. After the 'for' loop, outside of it, I set this Vector on the session as:
    session.setAttribute("RecordSet",vecRecordSet);
    Let me copy and paste an extract of the code from my source which shows what I have done.
    Vector set = new Vector();
    for (int i = 0; i < Integer.parseInt(request.getParameter("NoOfRecords")); i++) {
         Hashtable Record = new Hashtable();
         Record.put("ReminderStatus", ReminderStatus);
         Record.put("CustomerNo", CustomerNo);
         Record.put("CustomerName", CustomerName);
         Record.put("Address", address);
         Record.put("ContractNo", ContractNo);
         Record.put("MachineID", MachineID);
         Record.put("ModelNo", ModelNo);
         Record.put("Description", description);
         Record.put("SerialNo", SerialNo);
         Record.put("ContractValue", ContractValue);
         Record.put("NoOfServices", NoOfServices);
         Record.put("ContractStartDate", ContractStartDate);
         Record.put("ContractEndDate", ContractEndDate);
         set.add(Record);
    }
    session.setAttribute("ResultsSet", set); //SHERVEEN
    Don't worry about the for-loop condition up there, because it works for the main functionality that exists within the loop. What seems to go wrong is when I add records to the Hashtable and when I pack it into the Vector "set".
    I hope I have not done something wrong as a matter of principle here.
    And by the way, the error that I showed you before points at a row in the file that is generated when the JSP is compiled; the out.print part is generated in that file during compilation and I don't know why.
    I hope I have given you enough information to make some sense of my requirement, and thank you a million once again for your effort to help me out. I am grateful to all of you.
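For reference: `session.setAttribute(...)` returns `void`, and a JSP expression `<%= ... %>` compiles down to `out.print(...)`, which is exactly the generated line Tomcat is complaining about. The call must be a plain statement inside `<% ... %>`, and on the receiving page the attribute is cast back to its real type. A minimal sketch of the store/retrieve round trip, using a plain Map as a stand-in for the HttpSession (so it runs outside a servlet container):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;
import java.util.Vector;

public class SessionAttributeDemo {
    public static void main(String[] args) {
        // A plain Map stands in for HttpSession here; the real
        // session.setAttribute(...) has the same "store an Object" shape.
        Map<String, Object> session = new HashMap<>();

        Vector<Hashtable<String, String>> records = new Vector<>();
        Hashtable<String, String> row = new Hashtable<>();
        row.put("CustomerNo", "C001");
        records.add(row);

        // Correct: perform the store as its own statement,
        // never wrapped in out.print() / <%= ... %>.
        session.put("ResultsSet", records);

        // Later, in "JSP2": fetch the attribute and cast it back.
        @SuppressWarnings("unchecked")
        Vector<Hashtable<String, String>> loaded =
                (Vector<Hashtable<String, String>>) session.get("ResultsSet");
        System.out.println(loaded.get(0).get("CustomerNo"));
    }
}
```

The Vector itself is stored fine; no conversion is needed, only the cast on retrieval.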

  • What is the best way to explore a hierarchical folder structure?

    Hallo,
    I need to access and navigate a hierarchical folder structure hosted in a MS SQL Server database. In particular there is a root folder containing several folders. Each child-folder contains further nested folders or documents.
    For each item I need to retrieve the folder's (name, path, etc) and the documents (title, author, etc.) details that are retrievable from the DB fields. Afterwards I will use these data to create a semantic web ontology using Jena API.
    My question was about which is the best way to proceed.
    A collegue of mine suggested to use the "WITH" command of SQL Server to create and use a link list to navigate easily the structure, executing just one query rather than several (one for each level of the nested loops). However in this way the solution will work only with the SMQ Server database, while my goal is to achieve a more general solution.
    May someone help me?
    Thank you in advance,
    Francesco

    My goal is to create a documents library ontology, retrieving from each element of the hierarchy (folder or document) some data (title, parent, etc.) and using them to "label" the ontology resources.
    I will use a little of both approaches, in the following way:
    1) I make just ONE query on the folder table to get, for each folder, its path (e.g. root/fold1/fold2/doc1.pdf), its ID and its ParentID, and ONE on the Documents table to get the containerID, title, etc.
    2) I create as many Folder objects as there are retrieved records, and a HashTable where KEY = the Folder.ParentID value and VALUE = Vector<Folder>. I then add each object to the Vector for its ParentID. In this way I have a Vector containing all the folders that are children of the same parent folder, and I do the same with a HashTable keeping the documents contained in a specific folder.
    3) I extract from the HashTable the root folder (whose ParentID is always "0" and whose ID is "1"), then the method appendChild() is invoked (see code):
         public static void appendChild(int ID, Resource RES) {
              Vector<Folder> currFold = table.get(ID);
              for (int i = 0; i < currFold.size(); i++) {
                   Folder child = currFold.get(i);
                   // Extract the child and create the relative resource (creation elided)
                   if (table.containsKey(child.getID())) {
                        appendChild(child.getID(), childRES); // childRES: the Resource created for this child
                   }
              }
         }
    In this way I go depth-first into the hierarchical structure using a "leftmost" procedure. I made a test and the output is correct. However, this approach must be applied to folders about 4 levels deep (around 30 folders in total) that also contain documents, creating the documents library of a project. Then I must process around 20 projects to build such a documents library representation for all of them.
    By the way, I do not have to maintain the HashTable content after I have created the docs library ontology, so I use just one HashTable for ALL the projects and flush it after I finish the loop for one project, in order to save resources.
    My question is: is my approach right, or could I improve it in some way?
    Thank you for every suggestion/comment.
    Francesco
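The parent-keyed grouping described in step 2 can be rewritten as a self-contained sketch with `java.util` collections (class and field names here are illustrative, not the poster's actual code; `HashMap`/`ArrayList` play the roles of his `HashTable`/`Vector`):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FolderTreeDemo {
    // Minimal stand-in for the poster's Folder objects.
    record Folder(int id, int parentId, String name) {}

    public static void main(String[] args) {
        // Hypothetical rows as they might come back from the single folder query.
        List<Folder> rows = List.of(
                new Folder(1, 0, "root"),
                new Folder(2, 1, "fold1"),
                new Folder(3, 1, "fold2"),
                new Folder(4, 2, "fold1a"));

        // Group children by parent id, as described in step 2.
        Map<Integer, List<Folder>> byParent = new HashMap<>();
        for (Folder f : rows) {
            byParent.computeIfAbsent(f.parentId(), k -> new ArrayList<>()).add(f);
        }

        appendChildren(byParent, 0, "");
    }

    // Depth-first ("leftmost") walk, printing an indented tree.
    static void appendChildren(Map<Integer, List<Folder>> byParent, int parentId, String indent) {
        for (Folder child : byParent.getOrDefault(parentId, List.of())) {
            System.out.println(indent + child.name());
            appendChildren(byParent, child.id(), indent + "  ");
        }
    }
}
```

The approach itself is sound: one query plus an in-memory parent index makes the traversal O(n) regardless of depth, and it is portable across databases, unlike a recursive WITH query.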
    Edited by: paquito81 on May 27, 2008 8:15 AM

  • Hashing blows

    Question 1: Output the average number of collisions that occur when:
    number of records = 8
    size of hashtable = 16
    Calculate the average over 10 executions of the code
    Question 2: What happens if the number of records > size of hashtable?
    Can you fix this?
    Question 3: Output the average number of collisions that occur when:
    the number of records = 20
    for hashtables of sizes 20, 25, 30, 35, and 40
    Calculate this average over 10 executions of the code
    Write the output as a table of 21 integer values

    Hi Brada,
    As fun as it is to do other people's homework, I think you should try to do this yourself.
    Try looking on Wikipedia for some info about hashing:
    http://en.wikipedia.org/wiki/Hash_map
    Good luck
    WF
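Without handing over the full assignment, the general shape of such an experiment is: insert N random keys into a table of M slots and count each insert that lands on an already-occupied slot. A generic sketch (collision resolution is deliberately left out, since that is part of the exercise):

```java
import java.util.Random;

public class CollisionSketch {
    // Insert `records` random keys into a table of `size` slots and
    // count collisions (inserts that land on an occupied slot).
    static int countCollisions(int records, int size, Random rnd) {
        boolean[] occupied = new boolean[size];
        int collisions = 0;
        for (int i = 0; i < records; i++) {
            int slot = Math.floorMod(rnd.nextInt(), size);
            if (occupied[slot]) collisions++;
            else occupied[slot] = true;
        }
        return collisions;
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        int runs = 10, total = 0;
        // Average over 10 executions, per question 1 (8 records, table of 16).
        for (int i = 0; i < runs; i++) total += countCollisions(8, 16, rnd);
        System.out.println("average collisions: " + (double) total / runs);
    }
}
```

Looping `countCollisions` over the table sizes in question 3 is the obvious extension; question 2 falls out of the same code once records exceeds size.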

  • Structure-specific property on InfoCube

    Hi Gurus:
    Will the 'structure-specific property on InfoCube' help with query performance?
    I am thinking it only helps with data loading.
    If you think it helps in query perfromance, please suggest & send a link to the SAP documentation.
    I will be more than happy to assign points.
    Thanks

    Hi M,
    The structure-specific property is basically only useful when your query is defined for a fixed set of values.
    Suppose you have fixed the value of the characteristic to some constant.
    It can then act as a performance improver if you are using that characteristic in the query and the query uses only that set of values.
    The number of records retrieved for that query will then be smaller, and performance will definitely improve, as fewer records are transferred.
    But the drawback is that you will not be able to put any restriction on those characteristics in the Query Designer, and no drilldown is possible on such characteristics.
    Hope it is clear.

  • Method required for calling a file/shell script from XI

    Hi All,
    My scenario is to read multiple records from a stored procedure in DB2, creating one file for each record retrieved. I use NFS and add a timestamp to each file I write. I have completed this part. Now I have to call an external executable file or shell script for each file I create in the target directory. I read the blog
    <a href="/people/sameer.shadab/blog/2005/09/21/executing-unix-shell-script-using-operating-system-command-in-xi">Executing Unix shell script using Operating System Command in XI</a>, which describes passing the absolute path of the target file as a parameter when calling a shell script.
    My question is: when I am appending a timestamp to the target file, how can I call the shell script using the Operating System Command?
    And using the Operating System Command, can I call only a shell script, or any sort of executable file as well?
    Appreciate any response that helps.
    Thank you,
    Regards,
    Balaji.M

    Hi Archana,
    My question was: when I am appending a timestamp to the target file, how can I call the shell script using the Operating System Command?
    Say -
    Target Directory = /home/xd1/folderB
    File Name = Test.xml
    File Construction Mode = Add Time Stamp
    Operating System Command = /home/xd1/executables/runthis.sh %F
    Now, the output files are created in this sequence (after the timestamp is appended to the file name):
    Test20070111-141338-376.xml
    Test20070111-141343-213.xml
    Test20070111-142958-615.xml
    What my question meant was:
    would only <b>/home/xd1/folderB/Test.xml</b> be passed to runthis.sh,
    or
    would Test20070111-141338-376.xml be passed to runthis.sh first,
    Test20070111-141343-213.xml be passed to runthis.sh second,
    and Test20070111-142958-615.xml be passed to runthis.sh third?
    In short, would the output file be passed to the shell script after the timestamp is added to the filename?
    Regards,
    Balaji

  • ItemRenderer doesn't show color bg.  [see code]

    Hello,
    I've been trying to get an itemRenderer within a <s:SkinnableDataContainer> to show a color in its background. The itemRenderer is a pair of labels that repeats over a set of records (retrieved remotely) in the dataContainer. I'm using a TileLayout with 7 columns so all the data (each itemRenderer) appears tiled. Everything works fine except that the itemRenderer will not show background colors for the normal and hovered states (it defaults to white). Basically I want each repeating itemRenderer object within the container to have a background color.
    Application:
    <s:SkinnableDataContainer itemRenderer="components.tagoutInfo"
         creationComplete="list_creationCompleteHandler(event)"
         id="skinnableDataContainer"
         dataProvider="{getAlluserTableResult.lastResult}">
         <s:layout>
              <s:TileLayout requestedColumnCount="7"
                   orientation="rows"
                   horizontalGap="1"/>
         </s:layout>
    </s:SkinnableDataContainer>
    itemRenderer:
    <s:ItemRenderer name="tagoutInfo"
         xmlns:fx="http://ns.adobe.com/mxml/2009"
         xmlns:s="library://ns.adobe.com/flex/spark"
         xmlns:mx="library://ns.adobe.com/flex/mx"
         autoDrawBackground="false" fontSize="10">
         <s:layout>
              <s:VerticalLayout horizontalAlign="center" paddingBottom="4"/>
         </s:layout>
         <s:states>
              <s:State name="normal"/>
              <s:State name="hovered"/>
         </s:states>
         <s:Rect id="myRect"
              left="0" right="0" top="0" bottom="0"
              alpha="1.0">
              <s:stroke>
                   <s:SolidColorStroke color="black"
                        weight="1"/>
              </s:stroke>
              <s:fill>
                   <s:SolidColor color.normal="red" color.hovered="green"/>
              </s:fill>
         </s:Rect>
         <s:Label text="{data.firstName + ' ' + data.lastName}"/>
         <s:Label text="{data.reasonOut + ' ' + data.timeBack}"/>
    </s:ItemRenderer>

    Hi,
    It looks like what's happening is that your background Rect is being laid out vertically with the labels. You'll want to put the labels into a separate Group that has the defined vertical layout. This way, the Rect will be drawn behind the group of labels. Let me know if you have any more trouble.
    Thanks,
    -Kevin

  • Profile WS run time too long

    My first test run (in dev) of the custom Java PWS against a SQL Server database seems to be taking too long: approximately 1 second or more per user. This will be a problem when processing around 20,000 users. We can't have a PWS running for 4-6 hours every night.
    I'd really appreciate any tips to optimize this for faster performance. I am opening a DB connection and running 2 queries for each user, then closing the statement and connection. Can I do this another way, so that I open the DB connection (and maybe the statement) for the profile source only once and release the connection etc. after the whole job is done, or in case of errors?
    Please help as this could be a huge bottleneck for us!
    Thanks.
    Vanita
    Staples

    Hi Akash,
    Thanks for your quick reply. Unfortunately I don't have any signature field to limit the profile sync by. I am currently running the PWS in Tomcat 4.1.30. We plan to deploy on WAS in dev. I am not sure if connection pooling is done by Tomcat 4.1.30; do you have any information on that? I am doing the DB connect and the 2 property queries in the getUserProperties method.
    1. Would connection pooling (if done in WAS) make any difference to the run time?
    2. Would opening the connection in the initialize() method versus getUserProperties() make any difference?
    As always, thanks for your help.
    Vanita
    ------- Akash Jain wrote on 3/1/05 1:25 PM -------
    Hi Vanita,
    On recent hardware, you should be able to perform an initial profile sync at a rate of ~10/second. This means you should be able to perform your 20k-user profile sync in under an hour. Resyncs should be much faster if you use a signature attribute.
    I'm going to assume you're hitting some database backend with a table structure like the following:
    Users Table: String UserGUID, Date LastModified
    Properties Table: int PropID, String UserGUID, String PropValue
    You have your users keyed off a unique name (a GUID in this example) and properties in a separate table keyed off PropID and GUID.
    Let's review the protocol and each step:
    a) initialize() - sends the parameters of the profile sync to the PWS. This is a good place to do a single query to your database in order to cache all user unique names and signatures (LastModified dates) in a HashTable. This will make re-syncs much faster, since subsequent attachToUser() and getUserSignature() calls will be derived from this HashTable.
    b) attachToUser() - in this call you can simply look up your user record in the HashTable created in initialize(). If no entry is returned, throw a NoSuchUserException. If the user does exist, continue.
    c) getUserSignature() - again use the HashTable created in initialize() to look up the signature for this user; return it as a String.
    d) getUserProperties() - if called, this means the signature you sent back in step (c) has changed since the last profile sync. You now want to make a call to your properties table (a single DB call) to load all the property values for the user, and return these as a UserPropertyInfo object.
    During an initial sync, you will always get to step (d) above. During re-syncs, assuming there is low churn, I'd say a max of 1% of your calls will get to step (d), and thus the re-sync should be an order of magnitude faster in most cases.
    With respect to database connections: if you are opening and closing a connection for each user, this is pretty poor for performance. Your best bet in Java is to use a single connection (this is a single-threaded process) which is set up in initialize(). In the shutdown() method, close this connection.
    I hope this helps. The combination of using a single connection, using the signature attribute, and caching all the users' unique names and signatures in one call at the start of the profile sync should drastically increase performance.
    Thanks,
    Akash
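The cache-up-front, single-connection pattern Akash describes might be sketched like this. The class shape, table, and column names are illustrative only (the real PWS interface is not reproduced here); the point is one query and one connection for the whole sync:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

public class ProfileSyncSketch {
    private Connection conn;                          // one connection for the whole sync
    private final Map<String, String> signatures = new HashMap<>();

    // initialize(): open the connection once and cache every user's signature.
    public void initialize(String url, String user, String pass) throws SQLException {
        conn = DriverManager.getConnection(url, user, pass);
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT UserGUID, LastModified FROM Users");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                signatures.put(rs.getString(1), rs.getString(2));
            }
        }
    }

    // attachToUser()/getUserSignature() become pure map lookups, no DB round trip.
    public String getUserSignature(String guid) {
        return signatures.get(guid);                  // null => no such user
    }

    // shutdown(): release the single connection at the end of the job.
    public void shutdown() throws SQLException {
        if (conn != null) conn.close();
    }
}
```

With 20,000 users this replaces 20,000 connection open/close cycles and per-user lookups with one query plus in-memory hits.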

  • Cannot retrieve table records

    Hi
    When I am trying to retrieve a record from the table I am getting the following error, but the table exists in the database. Can anyone help me with this?
    SQL> SELECT * from kdev199.calls_external where rownum<2;
    SELECT * from kdev199.calls_external where rownum<2
    ERROR at line 1:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-29400: data cartridge error
    error opening file /test/kclaims/xcom/CALLS_EXTERNAL_24536.log

    Hi, please find the output below.
    SQL> SELECT DBMS_METADATA.GET_DDL('TABLE','CALLS_EXTERNAL','KDEV199') FROM DUAL;
    DBMS_METADATA.GET_DDL('TABLE','CALLS_EXTERNAL','KDEV199')
    CREATE TABLE "KDEV199"."CALLS_EXTERNAL"
    ( "CALL_ID" NUMBER,
    "CALL_DATE" DATE,
    "EMP_ID" NUMBER,
    "CALL_TYPE" VARCHAR2(12),
    "DETAILS" VARCHAR2(25)
    ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
    DEFAULT DIRECTORY "EXTERNAL_DIR_CLMXCOM"
    ACCESS PARAMETERS
    ( records delimited by newline
    LOGFILE 'calls.log'
    fields terminated by ','
    missing field values are null
    call_id, call_date char date_format date mask
    "mm-dd-yyyy:hh24:mi:ss",
    emp_id, call_type, details
    LOCATION
    ( 'calls.dat'
    SQL> SELECT count(*) from kdev199."calls_external"
    2 ;
    SELECT count(*) from kdev199."calls_external"
    ERROR at line 1:
    ORA-00942: table or view does not exist
    SQL> SELECT count(*) from kdev199.calls_external
    2 ;
    SELECT count(*) from kdev199.calls_external
    ERROR at line 1:
    ORA-29913: error in executing ODCIEXTTABLEOPEN callout
    ORA-29400: data cartridge error
    error opening file /test/kclaims/xcom/calls.log

  • What is the condition to retrieve the records between two dates

    Hi,
         I will give START_DATE and END_DATE to a program to retrieve the records between these two dates.
        How do I write the SELECT query to retrieve the records between these two dates?
        Suppose I want to retrieve NACHN and VORNA from PA0002;
    how do I then write the query to retrieve NACHN and VORNA between START_DATE and END_DATE?

    hi,
    TYPES: BEGIN OF ty_pa0002,
             nachn TYPE pad_nachn,
             vorna TYPE pad_vorna,
           END OF ty_pa0002.
    DATA: it_pa0002 TYPE TABLE OF ty_pa0002.
    PARAMETERS: start_date TYPE begda,
                end_date   TYPE endda.
    " Select records whose validity interval overlaps [start_date, end_date].
    SELECT nachn
           vorna
      FROM pa0002
      INTO TABLE it_pa0002
      WHERE begda LE end_date
        AND endda GE start_date.
    Regards,
    Shiva.

  • Getting OOM error while retrieving 2 lakh records from DB

    I have a Java program with a simple SELECT query that fetches around 2 lakh (200,000) records from the DB and displays them in a JSP page.
    When I try to fetch this huge amount of data the server runs out of memory (OOM).
    I got a few suggestions to add setFetchSize(1000) to my Statement.
    I am confused here whether I need to call setFetchSize() on the Statement or on the ResultSet.
    Please suggest a solution to retrieve all the records and avoid the OOM error.

    66bdf8de-2e37-445a-8436-0ad04a325040 wrote:
    Please find my answers below.
      What is a lakh?
          A lakh (or lac) is one hundred thousand.
      Does someone looking at a JSP page really want to crawl through a result set that large?
         Yes, all the records will be displayed in a JSP page and the user will navigate through all of them.
    Seriously?  Some human being is going to navigate through a page containing 200,000 records?  Seriously?
    I tested both scenarios (setFetchSize() on the Statement and on the ResultSet); it didn't work for me, I still get the same OOM.
    Then it would appear that the answer to your question "whether I need to setFetchSize() on the Statement or the ResultSet" is ... "neither".
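To answer the literal question: setFetchSize() exists on both java.sql.Statement and java.sql.ResultSet, and calling it on the Statement before executing is the usual place. But it only controls how many rows travel per network round trip; the OOM goes away only if rows are processed as they stream instead of being collected into one big in-memory list (or if the query is paged). A JDBC sketch, where the connection details and table name are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class StreamRowsSketch {
    // url/user/password and the table name are placeholders, not a real endpoint.
    public static void streamRows(String url, String user, String password) throws SQLException {
        try (Connection conn = DriverManager.getConnection(url, user, password);
             Statement stmt = conn.createStatement()) {
            stmt.setFetchSize(1000);                 // rows per round trip, not a row limit
            try (ResultSet rs = stmt.executeQuery("SELECT id, name FROM big_table")) {
                while (rs.next()) {
                    // Process each row here (e.g. write it to the response) instead of
                    // accumulating everything in a List -- accumulation is what causes the OOM.
                    process(rs.getLong(1), rs.getString(2));
                }
            }
        }
    }

    private static void process(long id, String name) { /* page/stream to the client */ }
}
```

Note that some drivers need extra settings for fetch size to take effect (MySQL's Connector/J, for instance, streams only under specific statement settings), and for a JSP the practical fix is usually server-side paging (LIMIT/OFFSET or ROWNUM windows) rather than one 200,000-row page.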

  • Retrieving a set of records

    Hello
    I'm a newbie and I have to write a procedure that returns a set of records. In Transact-SQL I do this using temporary tables; is it possible to do the same in Oracle? Thanks

    The procedure would look something like the one below. The other statements in the code are just there to help test the actual procedure.
    Please post more details so we can suggest a solution for your exact needs.
    SQL> variable rc refcursor
    SQL>
    SQL> create or replace procedure get_rows (rc OUT sys_refcursor) is
      2  begin
      3    open rc for
      4    select * from scott.emp ;
      5  end ;
      6  /
    Procedure created.
    SQL> exec get_rows(:rc) ;
    PL/SQL procedure successfully completed.
    SQL> print rc
         EMPNO ENAME      JOB              MGR HIREDATE           SAL       COMM     DEPTNO
          7369            CLERK           7902 17-DEC-1980       1200                    20
          7499            SALESMAN        7698 20-FEB-1981       1600        304         30
          7521            SALESMAN        7698 22-FEB-1981       1250        504         30
          7566            MANAGER         7839 02-APR-1981       2975                    20
          7654            SALESMAN        7698 28-SEP-1981       1250       1404         30
          7698            MANAGER         7839 01-MAY-1981       2850                    30
          7782            MANAGER         7839 09-JUN-1981       2450                    10
          7788            ANALYST         7566 19-APR-0087       3000                    20
          7839            PRESIDENT            17-NOV-1981       5000                    10
          7844            SALESMAN        7698 08-SEP-1981       1500          4         30
          7876            CLERK           7788 23-MAY-0087       1100                    20
          7900            CLERK           7698 03-DEC-1981        950                    30
          7902            ANALYST         7566 03-DEC-1981       3000                    20
          7934            CLERK           7782 23-JAN-1982       1300                    10
    14 rows selected.
    SQL>
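For completeness, the caller side: a Java client can fetch the same ref cursor through JDBC. The connection string below is a placeholder; `Types.REF_CURSOR` requires a JDBC 4.2 driver (older Oracle drivers use `oracle.jdbc.OracleTypes.CURSOR` instead):

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

public class RefCursorClient {
    public static void main(String[] args) throws SQLException {
        // Connection details are placeholders; an Oracle JDBC driver must be on the classpath.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//host:1521/service", "scott", "tiger");
             CallableStatement cs = conn.prepareCall("{call get_rows(?)}")) {
            cs.registerOutParameter(1, Types.REF_CURSOR);  // the OUT sys_refcursor
            cs.execute();
            // The OUT parameter comes back as an ordinary ResultSet.
            try (ResultSet rs = (ResultSet) cs.getObject(1)) {
                while (rs.next()) {
                    System.out.println(rs.getInt("EMPNO") + " " + rs.getString("JOB"));
                }
            }
        }
    }
}
```

This is why ref cursors are the usual Oracle answer to the Transact-SQL temp-table habit: the caller consumes the rows directly, with no intermediate table.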
