Duplicate keys for HashMap
Hi,
I have a class I created, call it MyClass. It looks like this:
public class MyClass {
    public String myString;

    public final String toString() {
        return myString;
    }

    public boolean equals(MyClass mc) {
        return (mc != null && myString.equalsIgnoreCase(mc.toString()));
    }
}

When I use objects of type MyClass as the keys to a HashMap, put() seems to allow duplicate keys. And when I query a key that I know has an entry, I get null. However, when I change my get() and put() calls to use MyClass.toString() as the key, everything works fine.
It seems to me that the JVM must be assuming that different instances of MyClass are different objects, and thus a new key/value pair is created on put() instead of the old value being overwritten. But MyClass has an equals() method, so when putting, why doesn't that tell the JVM that the key already exists?
Thanks,
Jeff
True, I missed that.
Were you thinking that the HashMap is going to use the equals you provided that takes MyClass as an arg? It won't. It uses the one that takes Object, which means you'll be using Object's equals, which compares references, so no MyClass instance can ever be equal to another.
You need to override equals (with the Object argument) and override hashCode.
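A minimal sketch of that fix, keeping the field and case-insensitive comparison from the original post (the constructor and main method are added here for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class MyClass {
    public String myString;

    public MyClass(String s) { myString = s; }

    @Override
    public String toString() { return myString; }

    @Override
    public boolean equals(Object o) {        // note: takes Object, not MyClass
        if (!(o instanceof MyClass)) return false;
        return myString.equalsIgnoreCase(((MyClass) o).myString);
    }

    @Override
    public int hashCode() {                  // must agree with equals:
        return myString.toLowerCase().hashCode(); // case-insensitive, like equals
    }

    public static void main(String[] args) {
        Map<MyClass, String> map = new HashMap<>();
        map.put(new MyClass("key"), "first");
        map.put(new MyClass("KEY"), "second");       // overwrites, no duplicate
        System.out.println(map.size());              // prints 1
        System.out.println(map.get(new MyClass("Key"))); // prints second
    }
}
```

Because equals uses equalsIgnoreCase, hashCode also has to ignore case; otherwise two "equal" keys could land in different buckets and the lookup would still fail.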
Similar Messages
-
Duplicate keys in Hashmap - any alternative?
Hi,
I'm using a HashMap that I have now unfortunately discovered may have duplicate keys, and the method I use to query the HashMap only returns one of the key-value pairs - is there any alternative I can use?
Thanks,
C

This is how I interpret the question.
"I'm using a HashMap" Ok, he has a HashMap, and a problem with it.
"that I have now unfortunately discovered may have duplicate keys" The problem is that he has now found out that he might need to bind several values to one key.
"the method I use to call the HashMap only calls one of the key-value pairs" That's because only one value is stored per key in the map.
"is there any alternative I can use?" See the link that I posted. -
PT 8.51 Upgrade - How to find duplicate keys for Portal Registry Structure?
I am upgrading peopletools for two applications (CRM 9.00 and FDM 9.00) from the 8.48.10 to 8.51.11 versions.
For CRM, Task 4-15-9, Completing the PeopleTools Conversion, ran app engine PTUPGCONVERT smoothly.
For FDM, I have the following error messages:
The Navigation Pagelet does not exist. Portal = EMPLOYEE. Object = PT_SC_PGT_MAIN_MENU. Collection = PT_PTPP_PORTAL_ROOT (219,2083)
Message Set Number: 219
Message Number: 2083
Message Reason: The Navigation Pagelet does not exist. Portal = EMPLOYEE. Object = PT_SC_PGT_MAIN_MENU. Collection = PT_PTPP_PORTAL_ROOT (219,2083)
Duplicate key. Portal: EMPLOYEE, Obj Name: PT_SC_PGT_MAIN_MENU, Nodename: LOCAL_NODE, URL: s/WEBLIB_PTPP_SC.HOMEPAGE.FieldFormula.IScript_SCP (133,4)
PT_SC_PGT_MAIN_MENU already exists. (96,3)
Copy of the Main Menu Pagelet From PS_SITETEMPLATE to Portal: EMPLOYEE failed. (219,2111)
Message Set Number: 219
Message Number: 2111
Message Reason: Copy of the Main Menu Pagelet From PS_SITETEMPLATE to Portal: EMPLOYEE failed. (219,2111)
I checked table PSPRSMDEFN which does not have an entry for PT_SC_PGT_MAIN_MENU, under the Employee Portal. I tried to migrate the missing Portal Registry object using App Designer but again receive the "duplicate key" error.
So it seems that I have to find the duplicate key and resolve it before I can migrate the missing object.
Anyone know a quick way to figure out what the duplicate keys are?
Thanks

I tried several things to find the duplicates, with no success.
A couple of workarounds were attempted that resulted in the same "duplicate key" error, including:
a) Re-copying file project PPLTLS84CUR
b) Copying object "PT_SC_PGT_MAIN_MENU" from Demo
After opening an SR, the successful workaround was to use Data Mover to export from Demo the "EMPLOYEE" portal entries for "PT_SC_PGT_MAIN_MENU" from tables PSPRSMDEFN, PSPRSMSYSATTR and PSPRSMSYSATTRVL. The import to the target upgrade environment was successful. A re-run of PTUPGCONVERT finished successfully.
The upgrade is progressing but where the duplicate keys are is still a mystery.
Cheers -
The following code and output illustrate the core of my problems using HashMap. I've created a simple key class wrapping a String and implementing equals, hashCode and compareTo. Entries can only be accessed via iterators, and duplicates appear. Using a TreeMap instead works fine, but I need the speed of HashMap for now:
import java.util.*;

class GenericGraphKey implements Comparable {
    private String val = "";

    public GenericGraphKey(String value) { val = value; }

    public boolean equals(GenericGraphKey k) { // overloads, rather than overrides, equals(Object)
        System.out.println("equals()");
        return val.equals(k.val);
    }

    public int hashCode() {
        System.out.println("hashCode():" + val.hashCode());
        return val.hashCode();
    }

    public int compareTo(Object o) {
        System.out.println("compareTo()");
        GenericGraphKey n = (GenericGraphKey) o;
        return val.compareTo(n.val);
    }

    public String toString() { return ("[" + val + "]"); }
}

public class TestApp {
    public static void main(String[] args) {
        HashMap t = new HashMap();
        System.out.print("A:");
        GenericGraphKey k1 = new GenericGraphKey("John");
        System.out.println("Put " + k1 + ",a :" + (String) t.put(k1, "a"));
        System.out.print("B:");
        GenericGraphKey k2 = new GenericGraphKey("John");
        System.out.println("Get " + k2 + ":" + (String) t.get(k2));
        System.out.print("C:");
        GenericGraphKey k3 = new GenericGraphKey("John");
        System.out.println("Put " + k3 + ",b :" + (String) t.put(k3, "b"));
        System.out.print("D:");
        GenericGraphKey k4 = new GenericGraphKey("John");
        System.out.println("Get " + k4 + ":" + (String) t.get(k4));
        System.out.print("E:");
        GenericGraphKey k5 = new GenericGraphKey("Jane");
        System.out.println("Put " + k5 + ",c :" + (String) t.put(k5, "c"));
        System.out.print("F:");
        GenericGraphKey k6 = new GenericGraphKey("Jane");
        System.out.println("Get " + k6 + ":" + (String) t.get(k6));
        System.out.print("G:");
        GenericGraphKey k7 = new GenericGraphKey("Allan");
        System.out.println("Put " + k7 + ",d :" + (String) t.put(k7, "d"));
        System.out.print("H:");
        GenericGraphKey k8 = new GenericGraphKey("Allan");
        System.out.println("Put " + k8 + ",e :" + (String) t.put(k8, "e"));
        System.out.print("I:");
        GenericGraphKey k9 = new GenericGraphKey("Allan");
        System.out.println("Get " + k9 + ":" + (String) t.get(k9));
        System.out.println();
        Map.Entry e = null;
        for (Iterator i = t.entrySet().iterator(); i.hasNext();) {
            e = (Map.Entry) i.next();
            System.out.print("{" + (String) e.getValue() + "}");
        }
        System.out.println();
        System.out.println(t.keySet());
    }
}
Output:
A:hashCode():2314539
Put [John],a :null
B:hashCode():2314539
Get [John]:null
C:hashCode():2314539
Put [John],b :null
D:hashCode():2314539
Get [John]:null
E:hashCode():2301262
Put [Jane],c :null
F:hashCode():2301262
Get [Jane]:null
G:hashCode():63353198
Put [Allan],d :null
H:hashCode():63353198
Put [Allan],e :null
I:hashCode():63353198
Get [Allan]:null
{c}{e}{d}{b}{a}
[[Jane], [Allan], [Allan], [John], [John]]
Thanks to anyone who can get things moving again.

Hombre, below is an illustration of what Chrisboy and jverd are telling you.
If the parent class has a method with the same signature as one in an interface which the child class implements, then your IDE and javac will not complain about the child class not implementing the interface method itself.
The following code compiles and at least Netbeans also has no problems with it.
public class NewClass {
    public NewClass() {
    }

    public void someMethod() {
    }
}

public interface NewInterface {
    public void someMethod();
}

public class NewClass2 extends NewClass implements NewInterface {
    public NewClass2() {
    }
} -
Requirement for object key for HashMap
Hi,
I would like to put an object into a HashMap keyed by my own object. What is the requirement for a class to be a key? I have a method boolean equals(Object o) defined in the key class. My key is composed of 2 ids, so in the equals method I compare the 2 ids, but it seems I can't get the value out. Please help. Thanks

How am I supposed to do the hashCode? If myKey1.equals(myKey2) returns true, then myKey1.hashCode() must return the same value as myKey2.hashCode(). One consequence of this is that if something is not used to compute equals(), then it must not be used to compute hashCode(). (Note that the reverse is not true. That is, if two objects are not equal, they can still return the same value from hashCode(), and hence, if some data is used for equals() it may still be left out of hashCode().)
You want hashCode to be 1) quick and easy to compute and 2) "well distributed" or "mostly unique". If you know how hash codes are used (in the general sense, not just the Java hashCode() method) then you should understand why those properties are desirable. If not, just ask.
The most common approach is to use some subset of the object's fields in computing the hashCode. If you have a Person object with lastName, firstName, birthdate, height, weight, address, and phone number, you probably wouldn't use all those fields. You could use just lastName, or maybe a combination of lastName and firstName.
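As a sketch, such an equals/hashCode pair could also lean on java.util.Objects, which is null-safe; only the lastName and firstName fields from the example are used, and the class shape here is an assumption:

```java
import java.util.Objects;

public class Person {
    private final String lastName;
    private final String firstName;

    public Person(String lastName, String firstName) {
        this.lastName = lastName;
        this.firstName = firstName;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Person)) return false;
        Person p = (Person) o;
        // Compare exactly the fields that feed hashCode, and no others.
        return Objects.equals(lastName, p.lastName)
            && Objects.equals(firstName, p.firstName);
    }

    @Override
    public int hashCode() {
        return Objects.hash(lastName, firstName); // null-safe combination
    }
}
```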
One generally combines multiple pieces by XORing (the "^" operator) the individual pieces (for primitives) or their hashcodes (for objects). For example, in the Person example:

public int hashCode() {
    return lastName.hashCode() ^ firstName.hashCode(); // but make sure to check for null first
} -
How to store duplicate keys in HashMap
Hi,
sun guys,
please can anyone guide me how to store duplicate values in the HashMap.
I think we need to override the equals and hashCode methods. Am I right?
If so, guide me how to do it?
thanks in advance,
nagaraju.uppala wrote:
"Hi," Hi,
"sun guys," Most of the people who answer questions here aren't from Sun.
"please any one guide me how to store duplicate values in the haspmap. i think we need override the equals and hash code methods. am i right?? if so guide me how to do it??" Associate the key with a list or a set, and place the values in that list/set.
A put then means that you first call get to see if a list is already associated with the key. If so, place the value in that list. Otherwise create a new list, place the value in it, and then call put with the list as the value.
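That recipe can be sketched like this (the helper name, generics, and demo class are mine, not from the thread):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultiMapDemo {
    // Store several values under one key by mapping the key to a list.
    static void putMulti(Map<String, List<String>> map, String key, String value) {
        List<String> values = map.get(key);
        if (values == null) {            // no list for this key yet
            values = new ArrayList<>();
            map.put(key, values);
        }
        values.add(value);               // multiple values per key are now fine
    }

    public static void main(String[] args) {
        Map<String, List<String>> map = new HashMap<>();
        putMulti(map, "Apple", "red");
        putMulti(map, "Apple", "green");
        System.out.println(map.get("Apple")); // prints [red, green]
    }
}
```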
Kaj -
Db adaptor for insert- SQLException: [SQL0803] Duplicate key value specified
While invoking the DB adapter to insert into table 1, selecting values from another table, I am getting an error. Before the insert I am updating table 2, also using the DB adapter.
QUERY insert into CRPDTA.F5504579 (SELECT * FROM CRPDTA.F5504571 WHERE PAHDC=#v_updatedRecord_HDC)
Error :
Non Recoverable System Fault :
<bpelFault><faultType>0</faultType><bindingFault xmlns="http://schemas.oracle.com/bpel/extension"><part name="summary"><summary>Exception occured when binding was invoked. Exception occured during invocation of JCA binding: "JCA Binding execute of Reference operation 'insert_Ledger_F5504579' failed due to: Pure SQL Exception. Pure SQL Execute of insert into CRPDTA.F5504579 (SELECT * FROM CRPDTA.F5504571 WHERE PAHDC=?) failed. Caused by java.sql.SQLException: [SQL0803] Duplicate key value specified.. The Pure SQL option is for border use cases only and provides simple yet minimal functionality. Possibly try the "Perform an operation on a table" option instead. This exception is considered not retriable, likely due to a modelling mistake. To classify it as retriable instead add property nonRetriableErrorCodes with value "--803" to your deployment descriptor (i.e. weblogic-ra.xml). To auto retry a retriable fault set these composite.xml properties for this invoke: jca.retry.interval, jca.retry.count, and jca.retry.backoff. All properties are integers. ". The invoked JCA adapter raised a resource exception. Please examine the above error message carefully to determine a resolution. </summary></part><part name="detail"><detail>[SQL0803] Duplicate key value specified.</detail></part><part name="code"><code>-803</code></part></bindingFault></bpelFault>
Please suggest....

Easter1976 wrote:
"Hi, please can you help me. I think I am having problems with transactions. I am deleting from a table and then inserting into the same table with the same key that I have just deleted." Simple then: don't do that. It suggests a flaw in the design. Either use a new key or do an update.
Note that you would also get a duplicate key error if the table is set up such that it doesn't actually delete, but does something such as creating a log entry with a delete flag set. -
Cannot send a null Map key for type 'java.util.HashMap'
Hi All,
I am having an issue with sending data from the server to the client using the AMF Channel.
Most of the method invocations on the RemoteObject are throwing the following Exception.
[CODE]
(mx.rpc.events::FaultEvent)#0
bubbles = false
cancelable = true
currentTarget = (null)
eventPhase = 2
fault = (mx.rpc::Fault)#1
content = (null)
errorID = 0
faultCode = "Server.Processing"
faultDetail = (null)
faultString = "Cannot send a null Map key for type 'java.util.HashMap'."
message = "faultCode:Server.Processing faultString:'Cannot send a null Map key for type 'java.util.HashMap'.' faultDetail:'null'"
name = "Error"
rootCause = (null)
headers = (null)
message = (mx.messaging.messages::ErrorMessage)#2
body = (null)
clientId = "22E55FB1-910E-312F-E37A-ED5167139CB0"
correlationId = "4DB54224-662A-C596-D165-F7C3EBB64DB8"
destination = "TimeMap"
extendedData = (null)
faultCode = "Server.Processing"
faultDetail = (null)
faultString = "Cannot send a null Map key for type 'java.util.HashMap'."
headers = (Object)#3
messageId = "22E56255-D62F-2ACF-4DA5-CF1E4D6353BB"
rootCause = (null)
timestamp = 1266877198902
timeToLive = 0
messageId = "22E56255-D62F-2ACF-4DA5-CF1E4D6353BB"
statusCode = 0
target = (null)
token = (mx.rpc::AsyncToken)#4
message = (mx.messaging.messages::RemotingMessage)#5
body = (Array)#6
clientId = (null)
destination = "TimeMap"
headers = (Object)#7
DSEndpoint = "my-amf"
DSId = "22E53936-7E0E-B21C-C936-EF1078000306"
messageId = "4DB54224-662A-C596-D165-F7C3EBB64DB8"
operation = "getMapKey"
source = (null)
timestamp = 0
timeToLive = 0
responders = (Array)#8
[0] (com.universalmind.cairngorm.events::Callbacks)#9
conflictHandler = (null)
faultHandler = (function)
priority = 0
resultHandler = (function)
result = (null)
type = "fault"
[/CODE]
The Spring bean which is exposed as a Remote Object has the following method signature..
[CODE]
public String getMapKey() {
    return mapKey;
}
[/CODE]
I am unable to understand why the AMF Channel or BlazeDS is treating the String as a HashMap!
This was working perfectly fine till yesterday.
The version of the BlazeDS i am using is : blazeds_turnkey_3-0-0-544
and the Flex SDK Version is : flex_sdk_3.5.0.12683
We recently upgraded to Flex 3.5.0 version earlier we were using 3.3 version
Thanks
mars

Hi All,
I checked my server-side Java beans (which are managed by Spring) and they are all returning the data properly, and none of the keys in the returned hashmaps are null.
Not sure why this is happening.
Thanks
kumars -
HashMap type object that allows multiple duplicate keys
Hello all,
I need an object that will allow me to have a key/value similar to a HashMap except that it will allow me to have duplicate keys.
i.e.:
//set(Key,Value)
set("Apple","red")
set("Apple","green")
set("Apple","brown")

Any ideas?
TIA!

Nevermind, I figured it out:

if (!map.containsKey(key)) {
    ArrayList arraylist = new ArrayList();
    arraylist.add(value);
    map.put(key, arraylist);
} else {
    ArrayList arraylist = (ArrayList) map.get(key);
    arraylist.add(value);
}

Thanks. -
Compaction for duplicate keys?
Over time, as I insert multiple records for the same duplicate key, they may end up on different blocks of different log files.
Does the cleaner thread do compaction, so that all of the records are pulled out and merged onto neighboring blocks, even if no updates/deletes have happened?
Thanks
Yang

Yes, the cleaner does this, but only to a limited degree.
And not with all configuration options. See EnvironmentConfig.CLEANER_LAZY_MIGRATION.
--mark -
I have an index constraint "IX_Tag_Processed" on the field "Tag_Name" for the table "Tag_Processed". I keep getting this constraint error:
Msg 2601, Level 14, State 1, Line 15
Cannot insert duplicate key row in object 'etag.Tag_Processed' with unique index 'IX_Tag_Processed'. The duplicate key value is (AZPS_TEMUWS0110BL4_CISO).
The statement has been terminated.
For this INSERT: I have tried using tagstg.Tag_Name NOT IN with same result:
INSERT into [Forecast_Data_Repository].[etag].[Tag_Processed] (Tag_Name, Tag_Type,Start_Datetime, End_Datetime, Source_SC, Sink_SC, Source_CA, Sink_CA, Source, Sink, Load_dt, Energy_product_code_id)
SELECT DISTINCT (Tag_Name), Tag_Type,Start_Datetime, End_Datetime, Source_SC, Sink_SC, Source_CA, Sink_CA, Source, Sink, GETUTCDATE(), [Forecast_Data_Repository].rscalc.GetStubbedEngProductCodeFromStaging(tagstg.Tag_Name)
FROM [Forecast_Data_Repository].[etag].[Tag_Stg] tagstg
WHERE tagstg.Id BETWEEN @minTId AND @maxTId --AND
--tagstg.Tag_Name NOT IN (
-- SELECT DISTINCT tproc.Tag_Name from [Forecast_Data_Repository].[etag].[Tag_Processed] tproc
thank you in advance,
Greg Hanson

I have even tried a MERGE, with the same constraint error:
DECLARE @minTId bigint, @minTRId bigint, @minEId bigint
DECLARE @maxTId bigint, @maxTRId bigint, @maxEId bigint
DECLARE @errorCode int
DECLARE @ReturnCodeTypeIdName nvarchar(50)
SELECT @minTRId = Min(Id) FROM [etag].[Transmission_Stg]
SELECT @maxTRId = Max(Id) FROM [etag].[Transmission_Stg]
SELECT @minTId = Min(Id) FROM [etag].[Tag_Stg]
SELECT @maxTId = Max(Id) FROM [etag].[Tag_Stg]
DECLARE @MergeOutputTag TABLE
(
    ActionType NVARCHAR(10),
    InsertTagName NVARCHAR(50)
    --UpdateTagName NVARCHAR(50)
    --DeleteTagName NVARCHAR(50)
);
DECLARE @MergeOutputEnergy TABLE
(
    ActionType NVARCHAR(10),
    InsertTagId BIGINT
    --UpdateTagName NVARCHAR(50)
    --DeleteTagName NVARCHAR(50)
);
DECLARE @MergeOutputTransmission TABLE
(
    ActionType NVARCHAR(10),
    InsertTagId BIGINT
    --UpdateTagName NVARCHAR(50)
    --DeleteTagName NVARCHAR(50)
);
MERGE [Forecast_Data_Repository].[etag].[Tag_Processed] tagProc
USING [Forecast_Data_Repository].[etag].[Tag_Stg] tagStg
ON
tagProc.Tag_Name = tagStg.Tag_Name AND
tagProc.Tag_Type = tagStg.Tag_Type AND
tagProc.Start_Datetime = tagStg.Start_Datetime AND
tagProc.End_Datetime = tagStg.End_Datetime AND
tagProc.Source_SC = tagStg.Source_SC AND
tagProc.Source_CA = tagStg.Source_CA AND
tagProc.Sink_CA = tagStg.Sink_CA AND
tagProc.Source = tagStg.Source AND
tagProc.Sink = tagStg.Sink
WHEN MATCHED THEN
UPDATE
SET Tag_Name = tagStg.Tag_Name,
Tag_Type = tagStg.Tag_Type,
Start_DateTime = tagStg.Start_Datetime,
End_Datetime = tagStg.End_Datetime,
Source_SC = tagStg.Source_SC,
Sink_SC = tagStg.Sink_SC,
Source_CA = tagStg.Source_CA,
Sink_CA = tagStg.Sink_CA,
Source = tagStg.Source,
Sink = tagStg.Sink,
Load_dt = GETUTCDATE()
WHEN NOT MATCHED BY TARGET THEN
INSERT (Tag_Name, Tag_Type, Start_Datetime, End_Datetime, Source_SC, Sink_SC, Source_CA, Sink_CA, Source, Sink, Load_dt)
VALUES (tagStg.Tag_Name, tagStg.Tag_Type, tagStg.Start_Datetime, tagStg.End_Datetime, tagStg.Source_SC, tagStg.Sink_SC, tagStg.Source_CA, tagStg.Sink_CA, tagStg.Source, tagStg.Sink, GETUTCDATE())
OUTPUT
$action,
INSERTED.Tag_Name
--UPDATED.Tag_Name
INTO @MergeOutputTag;
SELECT * FROM @MergeOutputTag;
Greg Hanson -
Primary key for a column containing duplicates
hi,
I have created a table, and I have a column consisting of 1000 records (but where I have duplicates). Now I want to create a primary key on the column. How can I do it?

Hi,
You can find the records which contain duplicate values for the table column using an Oracle exceptions table. Please see a small demonstration:
SQL> create table test1(id number);
Table created.
SQL> insert into test1 values(&id);
Enter value for id: 1
old 1: insert into test1 values(&id)
new 1: insert into test1 values(1)
1 row created.
SQL> /
Enter value for id: 2
old 1: insert into test1 values(&id)
new 1: insert into test1 values(2)
1 row created.
SQL> /
Enter value for id: 3
old 1: insert into test1 values(&id)
new 1: insert into test1 values(3)
1 row created.
SQL> /
Enter value for id: 1
old 1: insert into test1 values(&id)
new 1: insert into test1 values(1)
1 row created.
SQL> /
Enter value for id: 3
old 1: insert into test1 values(&id)
new 1: insert into test1 values(3)
1 row created.
SQL> /
Enter value for id: 4
old 1: insert into test1 values(&id)
new 1: insert into test1 values(4)
1 row created.
SQL> /
Enter value for id: 5
old 1: insert into test1 values(&id)
new 1: insert into test1 values(5)
1 row created.
SQL> commit;
Commit complete.
SQL> alter table test1 add constraint id_pk primary key(id);
alter table test1 add constraint id_pk primary key(id)
ERROR at line 1:
ORA-02437: cannot validate (SYS.ID_PK) - primary key violated
SQL> alter table test1 add constraint id_pk primary key(id) exceptions into exceptions;
alter table test1 add constraint id_pk primary key(id) exceptions into exceptions
ERROR at line 1:
ORA-02445: Exceptions table not found
SQL> @?/rdbms/admin/utlexcpt
Table created.
SQL> alter table test1 add constraint id_pk primary key(id) exceptions into exceptions;
alter table test1 add constraint id_pk primary key(id) exceptions into exceptions
ERROR at line 1:
ORA-02437: cannot validate (SYS.ID_PK) - primary key violated
SQL> desc exceptions
 Name                 Null?    Type
 ROW_ID                        ROWID
 OWNER                         VARCHAR2(30)
 TABLE_NAME                    VARCHAR2(30)
 CONSTRAINT                    VARCHAR2(30)
SQL> select * from exceptions;
ROW_ID OWNER TABLE_NAME CONSTRAINT
AAAc95AABAAA9EpAAD SYS TEST1 ID_PK
AAAc95AABAAA9EpAAA SYS TEST1 ID_PK
AAAc95AABAAA9EpAAE SYS TEST1 ID_PK
AAAc95AABAAA9EpAAC SYS TEST1 ID_PK
SQL> select * from test1 where rowid in(select row_id from exceptions);
ID
3
1
1
3
Thanks
Edited by: rarain on May 28, 2013 12:10 PM -
Optimal read write performance for data with duplicate keys
Hi,
I am constructing a database that will store data with duplicate keys.
For each key (a String) there will be multiple data objects, there is no upper limit to the number of data objects, but let's say there could be a million.
Data objects have a time-stamp (Long) field and a message (String) field.
At the moment I write these data objects into the database in chronological order, as i receive them, for any given key.
When I retrieve data for a key, and iterate across the duplicates for any given primary key using a cursor they are fetched in ascending chronological order.
What I would like to do is start fetching these records in reverse order, say just the last 10 records that were written to the database for a given key, and was wondering if anyone had some suggestions on the optimal way to do this.
I have considered writing data out in the order that I want to retrieve it, by supplying the database with a custom duplicate comparator. If I were to do this then the query above would return the latest data first, and I would be able to iterate over the most recent inserts quickly. But is there a performance penalty on writes to the database if I do this?
I have also considered using the time-stamp field as the unique primary key for the primary database instead of the String, and creating a secondary database for the String, this would allow me to index into the data using a cursor join, but I'm not certain it would be any more performant, at least not on writing to the database, since it would result in a very flat b-tree.
Is there a fundamental choice that I will have to make between write versus read performance? Any suggestions on tackling this much appreciated.
Many Thanks,
Joel

Hi Joel,
Using a duplicate comparator will slow down Btree access (writes and reads) to
some degree because the comparator is called a lot during searching. But
whether this is a problem depends on whether your app is CPU bound and how much
CPU time your comparator uses. If you can avoid de-serializing the object in
the comparator, that will help. For example, if you keep the timestamp at the
beginning of the data and only read the one long timestamp field in your
comparator, that should be pretty fast.
Another approach is to store the negation of the timestamp so that records
are sorted naturally in reverse timestamp order.
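A minimal sketch of such a reverse-order duplicate comparator, assuming (as suggested above) that each record's data begins with an 8-byte big-endian timestamp; the class name and record layout are assumptions for illustration:

```java
import java.nio.ByteBuffer;
import java.util.Comparator;

// Orders duplicate records by descending timestamp, reading only the
// leading long so the rest of the record never gets de-serialized.
public class ReverseTimestampComparator implements Comparator<byte[]> {
    @Override
    public int compare(byte[] a, byte[] b) {
        long ta = ByteBuffer.wrap(a).getLong();
        long tb = ByteBuffer.wrap(b).getLong();
        return Long.compare(tb, ta);   // reversed: newest first
    }
}
```

With JE this would be installed via DatabaseConfig.setDuplicateComparator(new ReverseTimestampComparator()) before opening the database.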
Another approach is to read backwards using a cursor. This takes a couple of steps:
1) Find the last duplicate for the primary key you're interested in:

cursor.getSearchKey(keyOfInterest, ...)
status = cursor.getNextNoDup(...)
if (status == SUCCESS) {
    // Found the next primary key, now back up one record.
    status = cursor.getPrev(...)
} else {
    // This is the last primary key, find the last record.
    status = cursor.getLast(...)
}

2) Scan backwards over the duplicates:

while (status == SUCCESS) {
    // Process one record
    // Move backwards
    status = cursor.getPrev(...)
}

Finally, another approach is to use a two-part primary key: {string, timestamp}.
Duplicates are not configured because every key is unique. I mention this
Duplicates are not configured because every key is unique. I mention this
because using duplicates in JE has more overhead than using a unique primary
key. You can combine this with either of the above approaches -- using a
comparator, negating the timestamp, or scanning backwards.
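A sketch of encoding that two-part key (the layout, separator byte, and class name are assumptions; non-negative timestamps are assumed so the big-endian bytes preserve numeric order):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CompositeKey {
    // Build a {string, timestamp} key: UTF-8 string bytes, a 0 separator,
    // then an 8-byte big-endian timestamp. Byte-wise unsigned comparison
    // then sorts by string first and timestamp second.
    static byte[] makeKey(String name, long timestamp) {
        byte[] nameBytes = name.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(nameBytes.length + 1 + 8);
        buf.put(nameBytes);
        buf.put((byte) 0);        // separator so "ab" sorts before "abc"
        buf.putLong(timestamp);   // big-endian keeps numeric order
        return buf.array();
    }
}
```

To get newest-first iteration, this could be combined with the negation idea above, e.g. storing Long.MAX_VALUE - timestamp instead of the raw timestamp.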
--mark -
Putting Duplicate Keys in a Hashtable.Please assist
Folks,
I would like to put duplicate keys into a Hashtable, so that the Hashtable looks like:
Entertainment,Video;
Entertainment,Pictures;
Entertainment,Camera;
where Entertainment is the key:
So I am using the following code:

public class TestHashTable {
    public static void main(String[] args) {
        Hashtable balance = new Hashtable();
        HashMap hm = new HashMap();
        Enumeration names;
        String str;
        balance.put("Entertainment", helperMethodAddToList(hm, "Entertainment", "Camera"));
        balance.put("Entertainment", helperMethodAddToList(hm, "Entertainment", "Video"));
        balance.put("Entertainment", helperMethodAddToList(hm, "Entertainment", "Pictures"));
        names = balance.keys();
        while (names.hasMoreElements()) {
            str = (String) names.nextElement();
            System.out.println(str + ":" + balance.get(str));
        }
    } // End of main.

    private static List helperMethodAddToList(Map m, String key, String value) {
        List vals = (List) m.get(key);
        if (vals == null) {
            vals = new LinkedList();
            m.put(key, vals);
        }
        vals.add(value);
        return vals;
    }
}

The output comes this way:
Entertainment:[Camera, Video, Pictures]
I don't want it that way, but as shown on the first lines.
Is this possible? Am I missing something?

import java.util.*;

public class X {
    public static void main(String[] args) {
        HashMap map = new HashMap();
        map.put("a", toList(new String[] {"alabama", "arkansas", "alaska"}));
        map.put("n", toList(new String[] {"nevada", "new mexico", "north dakota"}));
        map.put("w", toList(new String[] {"wyoming", "west virgina"}));
        dump(map);
    }

    static List toList(String[] strings) {
        return new ArrayList(Arrays.asList(strings));
    }

    static void dump(Map map) {
        for (Iterator i = map.entrySet().iterator(); i.hasNext(); ) {
            Map.Entry entry = (Map.Entry) i.next();
            System.out.print("key=" + entry.getKey() + ", values=");
            List values = (List) entry.getValue();
            for (Iterator j = values.iterator(); j.hasNext(); ) {
                String state = (String) j.next();
                System.out.print(state + ", ");
            }
            System.out.println();
        }
    }
} -
I have come across a problem when handling duplicate keys retrieved from a query result.
It is not hard, I believe, but I cannot find a solution.
the problem is:
I have a query that will retrieve rows with a structure like:
product_A, aaa
product_A, bbb
product_A, ccc
product_A, ddd
product_A, eee
product_B, 111
product_B, 222
product_B, 33
product_B, 334
product_C, 212
product_C, 411
In the JSP page, I can iterate to get each element, like product_x and number, displayed row by row in a table.
Now the requirement has changed: no duplicate key (product_X) should be displayed on the page, and all the numbers that belong to the same product key should be displayed alongside that product key.
that means to display like this:
product_A aaa bbb ccc ddd eee
product_B 111 222 33 334
product_C 212 411
The condition is I cannot change the original query; what I need to do is reorganize each row object and change the display as above.
I was trying to add each element, including product key and number, to a HashMap as key-value pairs, and then planned to restructure the data. But the HashMap does not support duplicate keys, so now I have no idea how to implement this.
Any of you has solution to it?
Appreciated!
Very junior programmer

My testing code, according to what you guys suggest, is below:

Hashtable map = new Hashtable();
String[] strArray1 = { "PRODUCT_A", "PRODUCT_A", "PRODUCT_A", "PRODUCT_A", "PRODUCT_A", "PRODUCT_A",
                       "PRODUCT_B", "PRODUCT_B", "PRODUCT_B", "PRODUCT_B", "PRODUCT_B", "PRODUCT_B",
                       "PRODUCT_C", "PRODUCT_C", "PRODUCT_C", "PRODUCT_C", "PRODUCT_C", "PRODUCT_C" };
String[] strArray2 = { "1000", "1001", "1002", "1003", "1004", "1005",
                       "2000", "2001", "2002", "2003", "2004", "2005",
                       "3000", "3001", "3002", "3003", "3004", "3005" };
for (int i = 0; i < strArray1.length; i++) {
    String productKey = strArray1[i];
    String productNumber = strArray2[i];
    List list = (ArrayList) map.get(productKey);
    if (list == null)
        map.put(productKey, list = new ArrayList());
    list.add(productNumber);
}
System.out.println(" map.size(): " + map.size());
Enumeration emuKey = map.keys();
while (emuKey.hasMoreElements()) {
    String productKey = (String) emuKey.nextElement();
    System.out.println("PRODUCT: " + productKey);
    ArrayList list = (ArrayList) map.get(productKey);
    for (int i = 0; i < list.size(); i++) {
        System.out.println("list[" + i + "]: " + (String) list.get(i));
    }
}
output:
map.size(): 3
PRODUCT: PRODUCT_C
list[0]: 3000
list[1]: 3001
list[2]: 3002
list[3]: 3003
list[4]: 3004
list[5]: 3005
PRODUCT: PRODUCT_B
list[0]: 2000
list[1]: 2001
list[2]: 2002
list[3]: 2003
list[4]: 2004
list[5]: 2005
PRODUCT: PRODUCT_A
list[0]: 1000
list[1]: 1001
list[2]: 1002
list[3]: 1003
list[4]: 1004
list[5]: 1005
With all your suggestions, it finally works. Thanks!