Making a hash of hashes

Hi,
I am trying to make a hash of hashes to store various runtime attributes. This hash of hashes lives in an object that is accessed by multiple server threads, as well as by a timer thread every two minutes.
The data is of the form:
{user}
     {last_in}
     {status}
     {message}
          {messageID}
          {messageID}

where {} denotes a hash key. So essentially the first hash has the users as the keys. Each user in turn has a hash of their particular attributes, one of which is the message list.
My first run at this object was to hold the message list as a hash, but I am having problems adding a message. I can add the message, but once the function call returns, the message no longer appears in the data structure.
Thinking this was a reference problem, I managed to do the 'put' in one line as well, so as to avoid any temporary variables. Sadly this still didn't work.
For now I am using Hashtables. I'm sure I'm missing something obvious ;)
Code follows; I'm still working on it, so the comments aren't finished. Pertinent, though, are the functions addUser, addMessage and getMessageList:
// ==========================================================================
// Class:           Server Data Storage
// Lead Developer:  Simon Proctor
// Module filename: DataObject.java
// Module package:  ark.util.command
//
// Version    Author    Date        Comment
// =======    ======    ========    ===============
// 1.0        Simon     04/09/01    Initial Build
//
// Notes
// =====
// Abstracts the data storage away from the server so that the implementation
// and storage are separate.
// ==========================================================================
package ark.util.command;

import java.util.*;

public class DataObject {

     // Class variables
     private static final Hashtable datastore = new Hashtable();
     private int messagecount = 1;

     public DataObject() {
     }

     public Set keySet() {
          return this.datastore.keySet();
     }

     public synchronized Object getData(String key) {
          return datastore.get(key);
     }

     public synchronized boolean exists(String key) {
          if(datastore.contains(key))
               return true;
          else
               return false;
     }

     public synchronized void setData(String key, String value) {
          datastore.put(key, value);
     }

     public synchronized void addUser(String user) {
          datastore.put(user, new Hashtable());
          Hashtable temp = (Hashtable) datastore.get(user);
          temp.put(new String("status"), "online");
          temp.put(new String("message"), new Hashtable());
          lastIn(user);
     }

     public synchronized void addMessage(String user, String message) {
          if(! exists(user))
               addUser(user);
          lastIn(user);
          messagecount++;
          Hashtable users = new Hashtable((Hashtable) datastore.get(user));
          Hashtable messages = new Hashtable((Hashtable) users.get("message"));
          Integer tempcount = new Integer(messagecount);
          messages.put(new String(tempcount.toString()), message);
          //((Hashtable)((Hashtable)datastore.get(user)).get("message")).put(new String(tempcount.toString()),message);
          System.out.println("Just tried :" + messagecount + " " + message);
          Enumeration print_test = messages.keys();
          while(print_test.hasMoreElements()) {
               String index = (String) print_test.nextElement();
               System.out.println(index + " " + messages.get(index));
          }
     }

     public synchronized String getMessage(String user, int messageID) {
          if(! exists(user))
               addUser(user);
          lastIn(user);
          Hashtable users = (Hashtable) datastore.get(user);
          Hashtable messages = (Hashtable) users.get("message");
          String msg = (String) messages.get(new Integer(messageID));
          messages.remove(new Integer(messageID));
          return msg;
     }

     public synchronized String getMessageList(String user) {
          if(! exists(user))
               addUser(user);
          lastIn(user);
          System.out.println("Looking at list for: " + user);
          Hashtable users = (Hashtable) datastore.get(user);
          Hashtable messages = (Hashtable) users.get("message");
          Enumeration message_set = messages.keys();
          StringBuffer temp = new StringBuffer();
          // Debug print
          Enumeration print_test = users.keys();
          while(print_test.hasMoreElements()) {
               System.out.println(print_test.nextElement());
          }
          while(message_set.hasMoreElements()) {
               String message = (String) message_set.nextElement();
               System.out.println("got : " + message);
               temp.append(message);
               temp.append(",");
          }
          String list = temp.toString();
          System.out.println("We now have : " + list);
          if(list.endsWith(","))
               return list.substring(0, list.length() - 1);
          else
               return list;
     }

     public synchronized void setStatus(String user, String status) {
          if(! exists(user))
               addUser(user);
          lastIn(user);
          Hashtable temp = (Hashtable) datastore.get(user);
          temp.put("status", status);
     }

     public synchronized String getUserList(String user) {
          if(! exists(user))
               addUser(user);
          lastIn(user);
          Enumeration user_set = datastore.keys();
          StringBuffer temp = new StringBuffer();
          while(user_set.hasMoreElements()) {
               String index = (String) user_set.nextElement();
               temp.append(index);
               temp.append(",");
          }
          String list = temp.toString();
          if(list.endsWith(","))
               return list.substring(0, list.length() - 1);
          else
               return list;
     }

     public synchronized void removeUser(String user) {
          datastore.remove(user);
     }

     private synchronized void lastIn(String user) {
          // Set the last used status
          Date now = new Date();
          long time = now.getTime();
          Hashtable users = (Hashtable) datastore.get(user);
          users.put("last_in", new Long(time));
     }
}

I tried the following:

     public synchronized void addMessage(String user, String message) {
          if(! exists(user))
               addUser(user);
          lastIn(user);
          messagecount++;
          Hashtable users = (Hashtable) datastore.get(user);
          Hashtable messages = (Hashtable) users.get("message");
          Integer tempcount = new Integer(messagecount);
          messages.put(new String(tempcount.toString()), message);
          //((Hashtable)((Hashtable)datastore.get(user)).get("message")).put(new String(tempcount.toString()),message);
          System.out.println("Just tried :" + messagecount + " " + message);
          Enumeration print_test = messages.keys();
          while(print_test.hasMoreElements()) {
               String index = (String) print_test.nextElement();
               System.out.println(index + " " + messages.get(index));
          }
     }

But still to no avail ;/.
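Two details in the posted code are worth a close look. First, java.util.Hashtable.contains() tests values, not keys, so exists(user) always returns false here; every addMessage call then re-runs addUser, which overwrites the stored user table (and its message table) with fresh empty ones. The key test is containsKey(). Second, the first version of addMessage wraps the stored tables in new Hashtable(...) copy constructors, so the put lands in throwaway copies. A minimal sketch with both points changed (same names as above, mutating the stored tables in place):

     public synchronized boolean exists(String key) {
          // containsKey() tests keys; contains() tests values
          return datastore.containsKey(key);
     }

     public synchronized void addMessage(String user, String message) {
          if(! exists(user))
               addUser(user);
          lastIn(user);
          messagecount++;
          // Work on the Hashtables actually stored in datastore, not on copies
          Hashtable attrs = (Hashtable) datastore.get(user);
          Hashtable messages = (Hashtable) attrs.get("message");
          messages.put(Integer.toString(messagecount), message);
     }

Note also that getMessage() looks a message up with an Integer key while addMessage() stores it under a String key, so even a message that stays put would not be found there.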

Similar Messages

  • Collection MAP Hash map hash table

    Hi
    I want to become familiar with the Collections/Map API, hash map & hash table.
    Please guide me...
    Tell me a resource link where I can get more information about it.
    Bunty

    Thanks Jos,
    it is more theoretical.

    You just skimmed through it for at most eleven minutes. How can you tell it's 'more theoretical'? It is a tutorial, you know ...
    kind regards,
    Jos
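    For a first taste of that API, here is a minimal sketch (standard java.util only; HashMap is the general-purpose unsynchronized implementation, Hashtable the older synchronized one). The official Java Collections tutorial is the usual starting point:

    import java.util.HashMap;
    import java.util.Map;

    public class MapBasics {
         public static void main(String[] args) {
              Map<String, String> map = new HashMap<String, String>();
              map.put("status", "online");                    // insert or overwrite a mapping
              System.out.println(map.get("status"));          // -> online
              System.out.println(map.containsKey("status"));  // -> true
              map.remove("status");                           // delete a mapping
         }
    }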

  • Problem with (hash)LOGO(hash)

    Hi,
    I have set up the #LOGO# for my application and it works fine in templates.
    Is there any way to access this in a region or item?
    I have tried a few different things, and it won't work.
    Otherwise I'll just have to create a new custom template.
    Regards
    Michael

    Hi,
    While I'm on the subject, I have 11 substitutions in my application.
    But the view apex_application_substitutions only has 10 records for my application.
    The last one on the list is missing.
    This is not a case of a new record not being saved, this has been in the application for months.
    Any Ideas?
    Regards
    Michael

  • Is there a way to change the type of hash for passwords in Active Directory?

    I would like to know what type of hash is used for Active Directory passwords, and whether there is a way to change it.

    Hi,
    Windows generates and stores user account passwords by using two different password representations, generally known as "hashes." When you set or change the password for a user account to a password that contains fewer than 15 characters, Windows generates both a LAN Manager hash (LM hash) and a Windows NT hash (NT hash) of the password.
    You can configure Windows to store only the stronger NT hash of your password.
    How to prevent Windows from storing a LAN manager hash of your password in Active Directory and local SAM databases
    http://support.microsoft.com/kb/299656/en-us
    Regards,
    Mandy

  • Query Degradation--Hash Join Degraded

    Hi All,
    I found a query degradation issue. I am on 10.2.0.3.0 (Sun OS) with optimizer_mode=ALL_ROWS.
    This is a data warehouse db.
    All 3 tables involved are partitioned tables (with daily partitions). Partitions are created in advance, and ELT jobs load bulk data into the daily partitions.
    I have checked that the CBO is not using the local indexes created on them, which I believe is appropriate, because when I used an INDEX hint the elapsed time increased.
    I tried giving an index hint for each table, one by one, but didn't get any performance improvement.
    Partitions are loaded daily, and after loading, partition-level stats are gathered with dbms_stats.
    We are collecting stats at partition level (granularity=>'PARTITION'). Even after collecting global stats, there is no change in the access pattern. The stats gather command is given below.
    PROCEDURE gather_table_part_stats(i_owner_name,i_table_name,i_part_name,i_estimate:= DBMS_STATS.AUTO_SAMPLE_SIZE, i_invalidate IN VARCHAR2 := 'Y',i_debug:= 'N')
    Only SOT_KEYMAP.IPK_SOT_KEYMAP is GLOBAL. All other indexes are LOCAL.
    Earlier we were having a BIND PEEKING issue, which I fixed by introducing NO_INVALIDATE=>FALSE in the stats gather job.
    Here, the partition name (20090219) is being passed through bind variables.
    SELECT a.sotrelstg_sot_ud sotcrct_sot_ud,
           b.sotkey_ud sotcrct_orig_sot_ud, a.ROWID stage_rowid
    FROM (SELECT sotrelstg_sot_ud, sotrelstg_sys_ud,
                 sotrelstg_orig_sys_ord_id, sotrelstg_orig_sys_ord_vseq
          FROM sot_rel_stage
          WHERE sotrelstg_trd_date_ymd_part = '20090219'
          AND sotrelstg_crct_proc_stat_cd = 'N'
          AND sotrelstg_sot_ud NOT IN (
                SELECT sotcrct_sot_ud
                FROM sot_correct
                WHERE sotcrct_trd_date_ymd_part = '20090219')) a,
         (SELECT MAX(sotkey_ud) sotkey_ud, sotkey_sys_ud,
                 sotkey_sys_ord_id, sotkey_sys_ord_vseq,
                 sotkey_trd_date_ymd_part
          FROM sot_keymap
          WHERE sotkey_trd_date_ymd_part = '20090219'
          AND sotkey_iud_cd = 'I'
          --not to select logical deleted rows
          GROUP BY sotkey_trd_date_ymd_part,
                   sotkey_sys_ud,
                   sotkey_sys_ord_id,
                   sotkey_sys_ord_vseq) b
    WHERE a.sotrelstg_sys_ud = b.sotkey_sys_ud
    AND a.sotrelstg_orig_sys_ord_id = b.sotkey_sys_ord_id
    AND NVL(a.sotrelstg_orig_sys_ord_vseq, 1) = NVL(b.sotkey_sys_ord_vseq, 1);
    During normal business hours, I found the query takes 5-7 min (which is also not acceptable), but during high-load business hours it takes 30-50 min.
    I found that most of the time is spent on the HASH JOIN (direct path write temp). We have sufficient RAM (64 GB total / 41 GB available).
    Below is the execution plan I got during normal business hours.
    | Id  | Operation                 | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
    |   1 |  HASH GROUP BY            |                     |      1 |      1 |   7844K|00:05:28.78 |      16M|    217K|  35969 |       |       |          |         |
    |*  2 |   HASH JOIN               |                     |      1 |      1 |   9977K|00:04:34.02 |      16M|    202K|  20779 |   580M|    10M|  563M (1)|     650K|
    |   3 |    NESTED LOOPS ANTI      |                     |      1 |      6 |   7855K|00:01:26.41 |      16M|   1149 |      0 |       |       |          |         |
    |   4 |     PARTITION RANGE SINGLE|                     |      1 |    258K|   8183K|00:00:16.37 |   25576 |   1149 |      0 |       |       |          |         |
    |*  5 |      TABLE ACCESS FULL    | SOT_REL_STAGE       |      1 |    258K|   8183K|00:00:16.37 |   25576 |   1149 |      0 |       |       |          |         |
    |   6 |     PARTITION RANGE SINGLE|                     |   8183K|    326K|    327K|00:01:10.53 |      16M|      0 |      0 |       |       |          |         |
    |*  7 |      INDEX RANGE SCAN     | IDXL_SOTCRCT_SOT_UD |   8183K|    326K|    327K|00:00:53.37 |      16M|      0 |      0 |       |       |          |         |
    |   8 |    PARTITION RANGE SINGLE |                     |      1 |    846K|     14M|00:02:06.36 |     289K|    180K|      0 |       |       |          |         |
    |*  9 |     TABLE ACCESS FULL     | SOT_KEYMAP          |      1 |    846K|     14M|00:01:52.32 |     289K|    180K|      0 |       |       |          |         |
    I will attach the same for the high-load business hours once the query returns results. It has been executing for the last 50 mins.
    INDEX STATS (INDEXES ARE LOCAL INDEXES)
    TABLE_NAME                          INDEX_NAME                          COLUMN_NAME        COLUMN_POSITION   NUM_ROWS DISTINCT_KEYS CLUSTERING_FACTOR
    SOT_REL_STAGE                       IDXL_SOTRELSTG_SOT_UD               SOTRELSTG_SOT_UD                 1   25461560      25461560            184180
    SOT_REL_STAGE                                                           SOTRELSTG_TRD_DATE               2   25461560      25461560            184180
                                                                            _YMD_PART
    TABLE_NAME                          INDEX_NAME                          COLUMN_NAME        COLUMN_POSITION   NUM_ROWS DISTINCT_KEYS CLUSTERING_FACTOR
    SOT_KEYMAP                          IDXL_SOTKEY_ENTORDSYS_UD            SOTKEY_ENTRY_ORD_S               1 1012306940             3          38308680
                                                                            YS_UD
    SOT_KEYMAP                          IDXL_SOTKEY_HASH                    SOTKEY_HASH                      1 1049582320    1049582320        1049579520
    SOT_KEYMAP                                                              SOTKEY_TRD_DATE_YM               2 1049582320    1049582320        1049579520
                                                                            D_PART
    SOT_KEYMAP                          IDXL_SOTKEY_SOM_ORD                 SOTKEY_SOM_UD                    1 1023998560     268949136         559414840
    SOT_KEYMAP                                                              SOTKEY_SYS_ORD_ID                2 1023998560     268949136         559414840
    SOT_KEYMAP                          IPK_SOT_KEYMAP                      SOTKEY_UD                        1 1030369480    1015378900          24226580
    TABLE_NAME                          INDEX_NAME                          COLUMN_NAME        COLUMN_POSITION   NUM_ROWS DISTINCT_KEYS CLUSTERING_FACTOR
    SOT_CORRECT                         IDXL_SOTCRCT_SOT_UD                 SOTCRCT_SOT_UD                   1  412484756     412484756         411710982
    SOT_CORRECT                                                             SOTCRCT_TRD_DATE_Y               2  412484756     412484756         411710982
                                                                            MD_PART
    INDEX partiton stas (from dba_ind_partitions)
    INDEX_NAME                     PARTITION_NAME       STATUS       BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR   NUM_ROWS SAMPLE_SIZE LAST_ANALYZ GLO
    IDXL_SOTCRCT_SOT_UD            P20090219            USABLE            1         372        327879            216663     327879      327879 20-Feb-2009 YES
    IDXL_SOTKEY_ENTORDSYS_UD       P20090219            USABLE            2        2910             3             36618     856229      856229 19-Feb-2009 YES
    IDXL_SOTKEY_HASH               P20090219            USABLE            2        7783        853956            853914     853956      119705 19-Feb-2009 YES
    IDXL_SOTKEY_SOM_ORD            P20090219            USABLE            2        6411        531492            157147     799758      132610 19-Feb-2009 YES
    IDXL_SOTRELSTG_SOT_UD          P20090219            USABLE            2       13897       9682052             45867    9682052      794958 20-Feb-2009 YES

    Thanks in advance.
    Bhavik Desai

    Hi Randolf,
    Thanks for the time you spent on this issue. I appreciate it.
    Please see my comments below:
    1. You've mentioned several times that you're passing the partition name as bind variable, but you're obviously testing the statement with literals rather than bind
    variables. So your tests obviously don't reflect what is going to happen in case of the actual execution. The cardinality estimates are potentially quite different when
    using bind variables for the partition key.
    Yes, I intentionally used literals in my tests. I found a couple of times that the plan used by the application and the plan generated by the AUTOTRACE+EXPLAIN PLAN command are the same, and caused hours of elapsed time.
    As I pointed out earlier, last month we solved a couple of bind peeking issues by introducing NO_INVALIDATE=>FALSE in the stats gather procedure, which we execute just after data load into such daily partitions and before the start of the jobs which execute this query.
    Execution plans from AWR (with parallelism on at table level, DEGREE>1) --> This plan is the one the CBO used when the degradation occurred. This plan is used most of the time.
    ELAPSED_TIME_DELTA BUFFER_GETS_DELTA DISK_READS_DELTA CURSOR(SELECT*FROMTA
            1918506000          46154275              918 CURSOR STATEMENT : 4
    CURSOR STATEMENT : 4
    PLAN_TABLE_OUTPUT
    SQL_ID 39708a3azmks7
    SELECT A.SOTRELSTG_SOT_UD SOTCRCT_SOT_UD, B.SOTKEY_UD SOTCRCT_ORIG_SOT_UD, A.ROWID STAGE_ROWID FROM (SELECT SOTRELSTG_SOT_UD,
    SOTRELSTG_SYS_UD, SOTRELSTG_ORIG_SYS_ORD_ID, SOTRELSTG_ORIG_SYS_ORD_VSEQ FROM SOT_REL_STAGE WHERE SOTRELSTG_TRD_DATE_YMD_PART = :B1 AND
    SOTRELSTG_CRCT_PROC_STAT_CD = 'N' AND SOTRELSTG_SOT_UD NOT IN( SELECT SOTCRCT_SOT_UD FROM SOT_CORRECT WHERE SOTCRCT_TRD_DATE_YMD_PART =
    :B1 )) A, (SELECT MAX(SOTKEY_UD) SOTKEY_UD, SOTKEY_SYS_UD, SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ, SOTKEY_TRD_DATE_YMD_PART FROM
    SOT_KEYMAP WHERE SOTKEY_TRD_DATE_YMD_PART = :B1 AND SOTKEY_IUD_CD = 'I' GROUP BY SOTKEY_TRD_DATE_YMD_PART, SOTKEY_SYS_UD,
    SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ) B WHERE A.SOTRELSTG_SYS_UD = B.SOTKEY_SYS_UD AND A.SOTRELSTG_ORIG_SYS_ORD_ID =
    B.SOTKEY_SYS_ORD_ID AND NVL(A.SOTRELSTG_ORIG_SYS_ORD_VSEQ, 1) = NVL(B.SOTKEY_SYS_ORD_VSEQ, 1)
    Plan hash value: 1213870831
    | Id  | Operation                     | Name                | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT              |                     |       |       | 19655 (100)|          |       |       |        |      |            |
    |   1 |  PX COORDINATOR               |                     |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)         | :TQ10003            |     1 |   116 | 19655   (1)| 00:05:54 |       |       |  Q1,03 | P->S | QC (RAND)  |
    |   3 |    HASH GROUP BY              |                     |     1 |   116 | 19655   (1)| 00:05:54 |       |       |  Q1,03 | PCWP |            |
    |   4 |     PX RECEIVE                |                     |     1 |   116 | 19655   (1)| 00:05:54 |       |       |  Q1,03 | PCWP |            |
    |   5 |      PX SEND HASH             | :TQ10002            |     1 |   116 | 19655   (1)| 00:05:54 |       |       |  Q1,02 | P->P | HASH       |
    |   6 |       HASH GROUP BY           |                     |     1 |   116 | 19655   (1)| 00:05:54 |       |       |  Q1,02 | PCWP |            |
    |   7 |        NESTED LOOPS ANTI      |                     |     1 |   116 | 19654   (1)| 00:05:54 |       |       |  Q1,02 | PCWP |            |
    |   8 |         HASH JOIN             |                     |     1 |   102 | 19654   (1)| 00:05:54 |       |       |  Q1,02 | PCWP |            |
    |   9 |          PX JOIN FILTER CREATE| :BF0000             |    13M|   664M|  2427   (3)| 00:00:44 |       |       |  Q1,02 | PCWP |            |
    |  10 |           PX RECEIVE          |                     |    13M|   664M|  2427   (3)| 00:00:44 |       |       |  Q1,02 | PCWP |            |
    |  11 |            PX SEND HASH       | :TQ10000            |    13M|   664M|  2427   (3)| 00:00:44 |       |       |  Q1,00 | P->P | HASH       |
    |  12 |             PX BLOCK ITERATOR |                     |    13M|   664M|  2427   (3)| 00:00:44 |   KEY |   KEY |  Q1,00 | PCWC |            |
    |  13 |              TABLE ACCESS FULL| SOT_REL_STAGE       |    13M|   664M|  2427   (3)| 00:00:44 |   KEY |   KEY |  Q1,00 | PCWP |            |
    |  14 |          PX RECEIVE           |                     |    27M|  1270M| 17209   (1)| 00:05:10 |       |       |  Q1,02 | PCWP |            |
    |  15 |           PX SEND HASH        | :TQ10001            |    27M|  1270M| 17209   (1)| 00:05:10 |       |       |  Q1,01 | P->P | HASH       |
    |  16 |            PX JOIN FILTER USE | :BF0000             |    27M|  1270M| 17209   (1)| 00:05:10 |       |       |  Q1,01 | PCWP |            |
    |  17 |             PX BLOCK ITERATOR |                     |    27M|  1270M| 17209   (1)| 00:05:10 |   KEY |   KEY |  Q1,01 | PCWC |            |
    |  18 |              TABLE ACCESS FULL| SOT_KEYMAP          |    27M|  1270M| 17209   (1)| 00:05:10 |   KEY |   KEY |  Q1,01 | PCWP |            |
    |  19 |         PARTITION RANGE SINGLE|                     | 16185 |   221K|     0   (0)|          |   KEY |   KEY |  Q1,02 | PCWP |            |
    |  20 |          INDEX RANGE SCAN     | IDXL_SOTCRCT_SOT_UD | 16185 |   221K|     0   (0)|          |   KEY |   KEY |  Q1,02 | PCWP |            |
    Other Execution plan from AWR
    ELAPSED_TIME_DELTA BUFFER_GETS_DELTA DISK_READS_DELTA CURSOR(SELECT*FROMTA
            1053251381                 0             2925 CURSOR STATEMENT : 4
    CURSOR STATEMENT : 4
    PLAN_TABLE_OUTPUT
    SQL_ID 39708a3azmks7
    SELECT A.SOTRELSTG_SOT_UD SOTCRCT_SOT_UD, B.SOTKEY_UD SOTCRCT_ORIG_SOT_UD, A.ROWID STAGE_ROWID FROM (SELECT SOTRELSTG_SOT_UD,
    SOTRELSTG_SYS_UD, SOTRELSTG_ORIG_SYS_ORD_ID, SOTRELSTG_ORIG_SYS_ORD_VSEQ FROM SOT_REL_STAGE WHERE SOTRELSTG_TRD_DATE_YMD_PART = :B1 AND
    SOTRELSTG_CRCT_PROC_STAT_CD = 'N' AND SOTRELSTG_SOT_UD NOT IN( SELECT SOTCRCT_SOT_UD FROM SOT_CORRECT WHERE SOTCRCT_TRD_DATE_YMD_PART =
    :B1 )) A, (SELECT MAX(SOTKEY_UD) SOTKEY_UD, SOTKEY_SYS_UD, SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ, SOTKEY_TRD_DATE_YMD_PART FROM
    SOT_KEYMAP WHERE SOTKEY_TRD_DATE_YMD_PART = :B1 AND SOTKEY_IUD_CD = 'I' GROUP BY SOTKEY_TRD_DATE_YMD_PART, SOTKEY_SYS_UD,
    SOTKEY_SYS_ORD_ID, SOTKEY_SYS_ORD_VSEQ) B WHERE A.SOTRELSTG_SYS_UD = B.SOTKEY_SYS_UD AND A.SOTRELSTG_ORIG_SYS_ORD_ID =
    B.SOTKEY_SYS_ORD_ID AND NVL(A.SOTRELSTG_ORIG_SYS_ORD_VSEQ, 1) = NVL(B.SOTKEY_SYS_ORD_VSEQ, 1)
    Plan hash value: 3434900850
    | Id  | Operation                     | Name                | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT              |                     |       |       |  1830 (100)|          |       |       |        |      |            |
    |   1 |  PX COORDINATOR               |                     |       |       |            |          |       |       |        |      |            |
    |   2 |   PX SEND QC (RANDOM)         | :TQ10003            |     1 |   131 |  1830   (2)| 00:00:33 |       |       |  Q1,03 | P->S | QC (RAND)  |
    |   3 |    HASH GROUP BY              |                     |     1 |   131 |  1830   (2)| 00:00:33 |       |       |  Q1,03 | PCWP |            |
    |   4 |     PX RECEIVE                |                     |     1 |   131 |  1830   (2)| 00:00:33 |       |       |  Q1,03 | PCWP |            |
    |   5 |      PX SEND HASH             | :TQ10002            |     1 |   131 |  1830   (2)| 00:00:33 |       |       |  Q1,02 | P->P | HASH       |
    |   6 |       HASH GROUP BY           |                     |     1 |   131 |  1830   (2)| 00:00:33 |       |       |  Q1,02 | PCWP |            |
    |   7 |        NESTED LOOPS ANTI      |                     |     1 |   131 |  1829   (2)| 00:00:33 |       |       |  Q1,02 | PCWP |            |
    |   8 |         HASH JOIN             |                     |     1 |   117 |  1829   (2)| 00:00:33 |       |       |  Q1,02 | PCWP |            |
    |   9 |          PX JOIN FILTER CREATE| :BF0000             |  1010K|    50M|   694   (1)| 00:00:13 |       |       |  Q1,02 | PCWP |            |
    |  10 |           PX RECEIVE          |                     |  1010K|    50M|   694   (1)| 00:00:13 |       |       |  Q1,02 | PCWP |            |
    |  11 |            PX SEND HASH       | :TQ10000            |  1010K|    50M|   694   (1)| 00:00:13 |       |       |  Q1,00 | P->P | HASH       |
    |  12 |             PX BLOCK ITERATOR |                     |  1010K|    50M|   694   (1)| 00:00:13 |   KEY |   KEY |  Q1,00 | PCWC |            |
    |  13 |              TABLE ACCESS FULL| SOT_KEYMAP          |  1010K|    50M|   694   (1)| 00:00:13 |   KEY |   KEY |  Q1,00 | PCWP |            |
    |  14 |          PX RECEIVE           |                     |    11M|   688M|  1129   (3)| 00:00:21 |       |       |  Q1,02 | PCWP |            |
    |  15 |           PX SEND HASH        | :TQ10001            |    11M|   688M|  1129   (3)| 00:00:21 |       |       |  Q1,01 | P->P | HASH       |
    |  16 |            PX JOIN FILTER USE | :BF0000             |    11M|   688M|  1129   (3)| 00:00:21 |       |       |  Q1,01 | PCWP |            |
    |  17 |             PX BLOCK ITERATOR |                     |    11M|   688M|  1129   (3)| 00:00:21 |   KEY |   KEY |  Q1,01 | PCWC |            |
    |  18 |              TABLE ACCESS FULL| SOT_REL_STAGE       |    11M|   688M|  1129   (3)| 00:00:21 |   KEY |   KEY |  Q1,01 | PCWP |            |
    |  19 |         PARTITION RANGE SINGLE|                     |  5209 | 72926 |     0   (0)|          |   KEY |   KEY |  Q1,02 | PCWP |            |
    |  20 |          INDEX RANGE SCAN     | IDXL_SOTCRCT_SOT_UD |  5209 | 72926 |     0   (0)|          |   KEY |   KEY |  Q1,02 | PCWP |            |
    EXECUTION PLAN AFTER SETTING DEGREE=1 (It was also degraded)
    | Id  | Operation                 | Name                | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT          |                     |     1 |   129 |       | 42336   (2)| 00:12:43 |       |       |
    |   1 |  HASH GROUP BY            |                     |     1 |   129 |       | 42336   (2)| 00:12:43 |       |       |
    |   2 |   NESTED LOOPS ANTI       |                     |     1 |   129 |       | 42335   (2)| 00:12:43 |       |       |
    |*  3 |    HASH JOIN              |                     |     1 |   115 |    51M| 42334   (2)| 00:12:43 |       |       |
    |   4 |     PARTITION RANGE SINGLE|                     |   846K|    41M|       |  8241   (1)| 00:02:29 |    81 |    81 |
    |*  5 |      TABLE ACCESS FULL    | SOT_KEYMAP          |   846K|    41M|       |  8241   (1)| 00:02:29 |    81 |    81 |
    |   6 |     PARTITION RANGE SINGLE|                     |  8161K|   490M|       | 12664   (3)| 00:03:48 |    81 |    81 |
    |*  7 |      TABLE ACCESS FULL    | SOT_REL_STAGE       |  8161K|   490M|       | 12664   (3)| 00:03:48 |    81 |    81 |
    |   8 |    PARTITION RANGE SINGLE |                     |  6525K|    87M|       |     1   (0)| 00:00:01 |    81 |    81 |
    |*  9 |     INDEX RANGE SCAN      | IDXL_SOTCRCT_SOT_UD |  6525K|    87M|       |     1   (0)| 00:00:01 |    81 |    81 |
    Predicate Information (identified by operation id):
       3 - access("SOTRELSTG_SYS_UD"="SOTKEY_SYS_UD" AND "SOTRELSTG_ORIG_SYS_ORD_ID"="SOTKEY_SYS_ORD_ID" AND
                  NVL("SOTRELSTG_ORIG_SYS_ORD_VSEQ",1)=NVL("SOTKEY_SYS_ORD_VSEQ",1))
       5 - filter("SOTKEY_TRD_DATE_YMD_PART"=20090219 AND "SOTKEY_IUD_CD"='I')
       7 - filter("SOTRELSTG_CRCT_PROC_STAT_CD"='N' AND "SOTRELSTG_TRD_DATE_YMD_PART"=20090219)
       9 - access("SOTRELSTG_SOT_UD"="SOTCRCT_SOT_UD" AND "SOTCRCT_TRD_DATE_YMD_PART"=20090219)

    2. Why are you passing the partition name as bind variable? A statement executing 5 mins. best, > 2 hours worst obviously doesn't suffer from hard parsing issues and therefore doesn't need to (shouldn't) share execution plans. So I strongly suggest using literals instead of bind variables. This also solves any potential issues caused by bind variable peeking.
    This is a custom application which uses bind variables to extract data from daily partitions: automated daily data extracts from the daily partitions after the load and ELT process.
    Here, the value of the bind variable is being passed through a procedure parameter. It would be a bit difficult to use literals in such an application.
    3. All your posted plans suffer from bad cardinality estimates. The NO_MERGE hint suggested by Timur only provided (significant) damage limitation, by reducing the row source size with the group-by operation before joining, but the optimizer is still way off; apart from the obviously wrong join order (larger row set first), it is in particular the NESTED LOOPS operation that causes the main trouble, due to excessive logical I/O, as already pointed out by Timur.
    Can I ask for alternatives to NESTED LOOPS?
    4. Your PLAN_TABLE seems to be old (you should see a corresponding note at the bottom of the DBMS_XPLAN.DISPLAY output), because none of the operations have any filter/access predicate information attached. Since your main issue is the bad cardinality estimates, I strongly suggest dropping any existing PLAN_TABLEs in any non-Oracle-owned schemas, because 10g already provides one in the SYS schema (GTT PLAN_TABLE$) exposed via a public synonym, so that the EXPLAIN PLAN output provides the "Predicate Information" section below the plan, covering the filter/access predicates.
    Please post a revised explain plan output including this crucial information so that we get a clue why the cardinality estimates are way off.
    I have dropped the old plan and got the execution plan above (listed in the first point) with PREDICATE information.
    "As already mentioned the usage of bind variables for the partition name makes this issue potentially worse."
    Is there any workaround without replacing the bind variable? I am on 10g, so 11g's features will not help !!!
    How are you gathering the statistics daily, can you post the exact command(s) used?
    gather_table_part_stats(i_owner_name,i_table_name,i_part_name,i_estimate:= DBMS_STATS.AUTO_SAMPLE_SIZE, i_invalidate IN VARCHAR2 := 'Y',i_debug:= 'N')
    Thanks & Regards,
    Bhavik Desai

  • Fundamental question on etherchannel hashing.

    Folks,
    I had some very basic questions on the way hashing takes place on Cisco switches connected to each other over a port channel.
    Consider that 2 links are used between the switches, namely ports Gi 0/1 and Gi 0/2.
    1) By default, I understand that whether this is an L2 or an L3 etherchannel, the default hashing mode is still the source and destination MAC address; am I correct?
    2) So now, if the source IP 1.1.1.1 needs to reach 1.1.1.2, how would the hashing calculation take place? Again, I understand that this is done on the basis of the last 3 bits of the IP address field: 001 and 010 come to 011, which is 3. So based on the number of ports, it selects which interface to assign.
    I still do not understand what the output below has to do with the selection:
    Core-01#sh interfaces port-channel 40 etherchannel
    Age of the Port-channel   = 204d:07h:40m:10s
    Logical slot/port   = 14/3          Number of ports = 2
    GC                  = 0x00000000      HotStandBy port = null
    Passive port list   = Te1/4 Te2/4
    Port state          = Port-channel L3-Ag Ag-Inuse
    Protocol            =    -
    Port security       = Disabled
    Fast-switchover     = disabled
    Fast-switchover Dampening = disabled
    Load share deferral = disabled
    Ports in the Port-channel:
    Index   Load      Port          EC state       No of bits
    ------+------+------------+------------------+-----------
     0      E4            Te1/4                 On   4
     1      1B            Te2/4                 On   4
    Time since last port bundled:    204d:07h:19m:44s    Te2/4
    Time since last port Un-bundled: 204d:07h:19m:54s    Te1/4
    Core-01#
    What does that Load value mean? How is it calculated? And does it have a role to play in the selection of the port?
    Thanks,
    Nik

    So it looks like how to divide the bits is decided by the switch.
    Ah, yea, reread my original post for answer to your second question.  ;)
    An actual hashing algorithm can be rather complicated.  What they try to do is to "fairly" divide the input value across the number of hash buckets.  For example, if you have two Etherchannel links, they will try to hash the hash attribute such that 50% goes to one link and 50% to the other link.
    For example, assuming you hash on source IP, how do you hash 192.168.1.1, .2, .3 and .4?  A very simple hash algorithm could just go "odd"/"even", but what if your IPs were .2, .4, .6 and .8?  A more complex algorithm would try to still hash these IPs equally across your two links.
    Again, hashing algorithms can get complex, search the Internet on the subject, and you may better appreciate this.
    Because of the possibility of such complexity, one vendor's hashing might be better than another vendor's, so they often will keep their actual algorithm secret.
    No matter, though, how good a hashing algorithm might be, they all will hash the same attribute to the same bucket.  For example, if you're still using source IP, and only source IP, as your hash attribute, all traffic from the same host is going to use the same link.  This is why what attributes are used are important.  Choices vary per Cisco platform.  Normally, you want the hash attributes to differ between every different traffic flow.
    Etherchannel does not analyze actual link loading.  If there are only two active flows, and both hash to the same link, that link might become saturated, while another link is completely idle.
    Short term Etherchannel usage, unless you're dealing with lots of flows, is often not very balanced.  Longer term Etherchannel usage, should be almost balanced.  If not, most likely you're not using the optimal hash attributes for your traffic.
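    To make the bucket idea concrete, here is a toy sketch in Java (purely illustrative; vendors' real algorithms are, as noted above, often kept secret): a flow attribute such as the source IP is hashed onto one of the bundle's links, so the same attribute value always selects the same link, and balance depends entirely on how well the hash spreads the values:

    import java.util.Arrays;

    public class EtherchannelHashDemo {
         // Hypothetical stand-in for a vendor hash: map a flow attribute
         // onto one of nLinks buckets.
         static int pickLink(String srcIp, int nLinks) {
              return Math.floorMod(srcIp.hashCode(), nLinks);
         }

         public static void main(String[] args) {
              int[] perLink = new int[2]; // two links in the bundle
              for (String ip : new String[] {"192.168.1.2", "192.168.1.4",
                                             "192.168.1.6", "192.168.1.8"}) {
                   perLink[pickLink(ip, 2)]++;
              }
              // All traffic from one host lands on one link; short-term balance
              // depends on how the hash happens to spread these four values.
              System.out.println(Arrays.toString(perLink));
         }
    }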

  • Hash Count Query

    Hi Experts,
    We have a business requirement in which we need to create hash count queries in SAP ABAP by joining different tables.
    Are these queries similar to the SAP Queries which we create using transactions SQ03/SQ02/SQ01/SQ00?
    I think hash table/hash count/hash algorithm is more on the SQL (database) side rather than SAP, and I didn't find any documentation for hash queries.
    So could you please help me with the above issue?
    If they are different, could you please explain the process to create them?
    Also, what are the advantages of hash queries?
    Thanks.

    These are normal infoset queries which we create using transaction SQ03, SQ02/SQ01/SQ00.

  • Hash created using DBMS_CRYPTO.Hash and md5sum, sha1sum not matching.

    Hi,
    I have a PL/SQL script, MD5SH1.sql, to generate md5 and sha1 hashes, but the hashes generated by this script and the hashes generated by the md5sum and sha1sum utilities do not match.
    Please tell me what is wrong.
    ------------------MD5SH1.sql----------------
    set serveroutput on
    DECLARE
    input_string VARCHAR2 (200) := 'Lord of the ring';
    output_str_md5 VARCHAR2 (200);
    output_str_sh1 VARCHAR2 (200);
    hash_raw_md5 RAW(2000);
    hash_raw_sh1 RAW(2000);
    hash_algo_type1 PLS_INTEGER := DBMS_CRYPTO.HASH_MD5;
    hash_algo_type2 PLS_INTEGER := DBMS_CRYPTO.HASH_SH1;
    BEGIN
    DBMS_OUTPUT.PUT_LINE ( 'Original string: ' || input_string);
    hash_raw_md5 := DBMS_CRYPTO.Hash (
    src => UTL_I18N.STRING_TO_RAW (input_string, NULL),
    --src => input_string,
    typ => hash_algo_type1
    );
    hash_raw_sh1 := DBMS_CRYPTO.Hash (
    src => UTL_I18N.STRING_TO_RAW (input_string, NULL),
    --src => input_string,
    typ => hash_algo_type2
    );
    output_str_md5 := UTL_I18N.RAW_TO_CHAR (hash_raw_md5,'US7ASCII');
    output_str_sh1 := UTL_I18N.RAW_TO_CHAR (hash_raw_sh1,'US7ASCII');
    --output_str_md5 := hash_raw_md5;
    --output_str_sh1 := hash_raw_sh1;
    --dbms_output.put_line(hash_raw_md5);
    --dbms_output.put_line(rawtohex(hash_raw_sh1));
    DBMS_OUTPUT.PUT_LINE ('MD5 Hash: ' || output_str_md5);
    DBMS_OUTPUT.PUT_LINE ('SH1 Hash: ' || output_str_sh1);
    END;
    /
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    The O/p of this script is
    SYS@mydb> @MD5SH1.sql
    Original string: Lord of the ring
    MD5 Hash: �����+.v�`�a_A
    SH1 Hash:
    ����E�k�[�F[c�2�
    PL/SQL procedure successfully completed.
    SYS@mydb>
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    The o/p of md5sum and sha1sum is
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    [san@sde4 ~]$ echo "Lord of the ring" |md5sum
    f3fbb6dfd8a2d6f8f6aeabc4d6e17c57 -
    [san@sde4 ~]$ echo "Lord of the ring" |sha1sum
    856f6132e23c7e335ca4188bd45c7bc9515f6905 -
    [san@sde4 ~]$
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Why are these 2 hashes not matching?
    -Santosh Mhaskar

    You should take into account that the echo command appends a newline by default.
    SQL> set serveroutput on
    SQL> set echo on
    SQL> declare
      2    input_string       varchar2(200) := 'Lord of the ring'||chr(10);
      3    output_str_md5  varchar2(200);
      4    output_str_sh1  varchar2(200);
      5    hash_raw_md5       raw(2000);
      6    hash_raw_sh1       raw(2000);
      7    hash_algo_type1 pls_integer := dbms_crypto.hash_md5;
      8    hash_algo_type2 pls_integer := dbms_crypto.hash_sh1;
      9
    10  begin
    11    dbms_output.put_line('Original string: ' || input_string);
    12    hash_raw_md5      := dbms_crypto.hash(src => utl_raw.cast_to_raw(input_string),
    13                                       typ => hash_algo_type1);
    14    hash_raw_sh1      := dbms_crypto.hash(src => utl_raw.cast_to_raw(input_string),
    15                                       typ => hash_algo_type2);
    16    dbms_output.put_line('MD5 Hash: ' || lower(rawtohex(hash_raw_md5)));
    17    dbms_output.put_line('SH1 Hash: ' || lower(rawtohex(hash_raw_sh1)));
    18  end;
    19  /
    Original string: Lord of the ring
    MD5 Hash: f3fbb6dfd8a2d6f8f6aeabc4d6e17c57
    SH1 Hash: 856f6132e23c7e335ca4188bd45c7bc9515f6905
    PL/SQL procedure successfully completed.
    SQL> !echo "MD5 Hash:" `echo 'Lord of the ring' | md5sum`
    MD5 Hash: f3fbb6dfd8a2d6f8f6aeabc4d6e17c57 -
    SQL> !echo "SH1 Hash:" `echo 'Lord of the ring' | sha1sum`
    SH1 Hash: 856f6132e23c7e335ca4188bd45c7bc9515f6905 -Best regards
    Maxim
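    The same effect is easy to reproduce in Java with the standard MessageDigest API (a sketch; the UTF-8 charset choice is an assumption for this ASCII input): hashing the bare string gives one digest, while appending the newline that echo adds reproduces the md5sum output quoted above:

    import java.security.MessageDigest;

    public class Md5NewlineDemo {
         static String hex(byte[] bytes) {
              StringBuilder sb = new StringBuilder();
              for (byte b : bytes) sb.append(String.format("%02x", b & 0xff));
              return sb.toString();
         }

         public static void main(String[] args) throws Exception {
              MessageDigest md5 = MessageDigest.getInstance("MD5");
              // What DBMS_CRYPTO.Hash saw: no trailing newline.
              System.out.println(hex(md5.digest("Lord of the ring".getBytes("UTF-8"))));
              // What md5sum saw: echo appended a newline.
              System.out.println(hex(md5.digest("Lord of the ring\n".getBytes("UTF-8"))));
              // Only the second line matches f3fbb6dfd8a2d6f8f6aeabc4d6e17c57.
         }
    }

    The other half of the mismatch is presentation: the raw digest bytes must be hex-encoded (RAWTOHEX in the PL/SQL above, the hex() helper here), not cast to characters as the original script did with UTL_I18N.RAW_TO_CHAR.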

  • What is hash algorithm?

    Hi Everyone,
    Please see this article and explain to me what a hash algorithm is. How does it work internally?
    Hash Partitioning
    Hash partitioning maps data to partitions based on a hashing algorithm that Oracle
    applies to the partitioning key that you identify. The hashing algorithm evenly
    distributes rows among partitions, giving partitions approximately the same size.
    Hash partitioning is the ideal method for distributing data evenly across devices. Hash
    partitioning is also an easy-to-use alternative to range partitioning, especially when
    the data to be partitioned is not historical or has no obvious partitioning key.

    Can we say it will act as an index?
    Regards,
    BS2012.
    Edited by: BS2012 on May 17, 2013 4:53 PM

    Hi,
    I was previously checking some basic stuff on other sites and suffered for it. Because of that, I want to confirm everything in the forum itself. I got my answer. Thanks anyway for your reply.
    Regards,
    BS2012.
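    Conceptually (a sketch, not Oracle's actual algorithm, which is not published): hash partitioning applies a deterministic function to the partitioning key and takes the result modulo the partition count, so equal keys always land in the same partition and distinct keys spread roughly evenly. It decides placement only; it is not an index, although the optimizer can use it to prune partitions:

    public class HashPartitionDemo {
         // Deterministic: the same key always maps to the same partition.
         static int partitionFor(Object key, int partitions) {
              return Math.floorMod(key.hashCode(), partitions);
         }

         public static void main(String[] args) {
              for (int rowId : new int[] {101, 102, 103, 104}) {
                   System.out.println("row_id " + rowId + " -> partition "
                                      + partitionFor(rowId, 8));
              }
         }
    }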

  • Hash Partitioning

    Hi All,
    Does hash partitioning always use the same hashing function, and will it always produce the same result if a new table is created with the same number of hash partitions hashed on the same field?
    For example, I have to join a multi-million record data set to table1 this morning. table1 is hash partitioned on row_id into 32 partitions.
    If I create a temp table to hold the data I want to join and hash partition it likewise into 32 partitions on row_id, will any given record from partition number N in my new table find its match in partition number N of table1?
    If so, that would allow us to join one partition at a time, which performs dramatically better in the resource-contested environment.
    I hope you can help.

    Using 10gR2
    Partition pruning does occur when a hash-partitioned table is joined to a global temporary table, provided the primary key on the global temp table is the key used for hashing the relational table:
    SQL> create table t (
      2    a number)
      3    partition by hash(a) (
      4      partition p1 ,
      5      partition p2 ,
      6      partition p3 ,
      7      partition p4
      8    )
      9  /
    Table created.
    SQL>
    SQL> alter table t add (constraint t_pk primary key (a)
      2  using index local (partition p1_idx
      3                   , partition p2_idx
      4                   , partition P3_idx
      5                   , partition p4_idx)
      6  )
      7  /
    Table altered.
    SQL> insert into t (a) values (1);
    1 row created.
    SQL> insert into t (a) values (2);
    1 row created.
    SQL> insert into t (a) values (3);
    1 row created.
    SQL>  insert into t (a) values (4);
    1 row created.
    SQL> commit;
    Commit complete.
    SQL>
    SQL> create global temporary table tm (a number)
      2  /
    Table created.
    SQL> insert into tm (a) values (2);
    1 row created.
    SQL> set autotrace traceonly explain
    SQL> select tm.a from tm, t
      2  where tm.a = t.a
      3  /
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=1 Bytes=26)
       1    0   NESTED LOOPS (Cost=2 Card=1 Bytes=26)
       2    1     TABLE ACCESS (FULL) OF 'TM' (TABLE (TEMP)) (Cost=2 Card=
              1 Bytes=13)
       3    1     PARTITION HASH (ITERATOR) (Cost=0 Card=1 Bytes=13)
       4    3       INDEX (UNIQUE SCAN) OF 'T_PK' (INDEX (UNIQUE)) (Cost=0
            Card=1 Bytes=13)

    As you can see from the above, a full scan was performed on the global temp table TM, but partition pruning occurred on T. So, in theory, whatever data you load the global temp table with will be matched to the right partition.
    P;

  • View Password hash in Active Directory

    Hi all,
    I am the administrator and I want to view the password hashes of the users in Active Directory. Please tell me how I can view them, and where the password hashes of the users are stored in Active Directory.

    Hi,
    Before going further, let’s clarify how Windows store password.
    Instead of storing the user account password in clear-text, Windows generates and stores user account passwords by using two different password representations, generally known as "hashes." When you set or change the password for a user account to a password that contains fewer than 15 characters, Windows generates both a LAN Manager hash (LM hash) and a Windows NT hash (NT hash) of the password. These hashes are stored in the local Security Accounts Manager (SAM) database (C:\Windows\System32\config\SAM file) or in Active Directory (C:\Windows\NTDS\ntds.dit file on DCs).
    You can force Windows to use NT Hash password. For detailed information, please refer to the following article.
    How to prevent Windows from storing a LAN manager hash of your password in Active Directory and local SAM databases
    http://support.microsoft.com/kb/299656
    After you configure Password History, the Active Directory service will check the password hashes stored in the AD database to determine whether a user meets the requirement. An administrator doesn't need to view or use password hashes.
    Regarding the security of password, the following article may be helpful.
    Should you worry about password cracking?
    http://blogs.technet.com/jesper_johansson/archive/2005/10/13/410470.aspx
    Hope this information can be helpful.

  • Two different HASH GROUP BY in execution plan

    Hi ALL;
    Oracle version
    select *From v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE    11.1.0.7.0      Production
    TNS for Linux: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production

    SQL:
    select company_code, account_number, transaction_id,
           decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,
           (last_day(to_date('04/21/2010','MM/DD/YYYY')) - min(z.accounting_date)) age, sum(z.amount)
    from (
         select /*+ PARALLEL(use, 2) */ company_code, substr(account_number, 1, 5) account_number, transaction_id,
                decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,
                use.amount, use.accounting_date
         from financials.unbalanced_subledger_entries use
         where use.accounting_date >= to_date('04/21/2010','MM/DD/YYYY')
         and use.accounting_date < to_date('04/21/2010','MM/DD/YYYY') + 1
         UNION ALL
         select /*+ PARALLEL(se, 2) */ company_code, substr(se.account_number, 1, 5) account_number, transaction_id,
                decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,
                se.amount, se.accounting_date
         from financials.temp2_sl_snapshot_entries se, financials.account_numbers an
         where se.account_number = an.account_number
         and an.subledger_type in ('C', 'AC')
    ) z
    group by company_code, account_number, transaction_id, decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type)
    having abs(sum(z.amount)) >= 0.01

    explain plan
    Plan hash value: 1993777817
    | Id  | Operation                      | Name                         | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT               |                              |       |       | 76718 (100)|          |        |      |            |
    |   1 |  PX COORDINATOR                |                              |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)          | :TQ10002                     |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,02 | P->S | QC (RAND)  |
    |*  3 |    FILTER                      |                              |       |       |            |          |  Q1,02 | PCWC |            |
    |   4 |     HASH GROUP BY              |                              |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,02 | PCWP |            |
    |   5 |      PX RECEIVE                |                              |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,02 | PCWP |            |
    |   6 |       PX SEND HASH             | :TQ10001                     |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,01 | P->P | HASH       |
    |   7 |        HASH GROUP BY           |                              |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,01 | PCWP |            |
    |   8 |         VIEW                   |                              |    15M|  2055M| 76116   (1)| 00:15:14 |  Q1,01 | PCWP |            |
    |   9 |          UNION-ALL             |                              |       |       |            |          |  Q1,01 | PCWP |            |
    |  10 |           PX BLOCK ITERATOR    |                              |    11 |   539 |  1845   (1)| 00:00:23 |  Q1,01 | PCWC |            |
    |* 11 |            TABLE ACCESS FULL   | UNBALANCED_SUBLEDGER_ENTRIES |    11 |   539 |  1845   (1)| 00:00:23 |  Q1,01 | PCWP |            |
    |* 12 |           HASH JOIN            |                              |    15M|   928M| 74270   (1)| 00:14:52 |  Q1,01 | PCWP |            |
    |  13 |            BUFFER SORT         |                              |       |       |            |          |  Q1,01 | PCWC |            |
    |  14 |             PX RECEIVE         |                              |    21 |   210 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |  15 |              PX SEND BROADCAST | :TQ10000                     |    21 |   210 |     2   (0)| 00:00:01 |        | S->P | BROADCAST  |
    |* 16 |               TABLE ACCESS FULL| ACCOUNT_NUMBERS              |    21 |   210 |     2   (0)| 00:00:01 |        |      |            |
    |  17 |            PX BLOCK ITERATOR   |                              |    25M|  1250M| 74183   (1)| 00:14:51 |  Q1,01 | PCWC |            |
    |* 18 |             TABLE ACCESS FULL  | TEMP2_SL_SNAPSHOT_ENTRIES    |    25M|  1250M| 74183   (1)| 00:14:51 |  Q1,01 | PCWP |            |
    Predicate Information (identified by operation id):
       3 - filter(ABS(SUM(SYS_OP_CSR(SYS_OP_MSR(SUM("Z"."AMOUNT"),MIN("Z"."ACCOUNTING_DATE")),0)))>=.01)
      11 - access(:Z>=:Z AND :Z<=:Z)
           filter(("USE"."ACCOUNTING_DATE"<TO_DATE(' 2010-04-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "USE"."ACCOUNTING_DATE">=TO_DATE(' 2010-04-21 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
      12 - access("SE"."ACCOUNT_NUMBER"="AN"."ACCOUNT_NUMBER")
      16 - filter(("AN"."SUBLEDGER_TYPE"='AC' OR "AN"."SUBLEDGER_TYPE"='C'))
      18 - access(:Z>=:Z AND :Z<=:Z)

    I have a few doubts regarding this execution plan, and I am sure my questions will get answered here.
    Q-1: Why am I getting two different HASH GROUP BY operations (operation ids 4 & 7) even though there is only a single GROUP BY clause? Is that due to the UNION ALL operator merging two different row sources, with HASH GROUP BY being applied to each of them individually?
    Q-2: What does 'BUFFER SORT' (operation id 13) indicate? Sometimes I get this operation and sometimes I don't. For some other queries, I have observed around 10GB of TEMP space and high cost against this operation. So I am just curious whether it is really helpful; if not, how do I avoid it?
    Q-3: Under the PREDICATE section, what does step 18 suggest? I am not using any filter like this: access(:Z>=:Z AND :Z<=:Z)

    aychin wrote:
    Hi,
    About BUFFER SORT, first of all it is not specific to parallel executions. This step in the plan indicates that internal sorting has a place. It doesn't mean that rows will be returned sorted; in other words, it doesn't guarantee that rows will be sorted in the resulting row set, because that is not the main purpose of this operation.

    I've previously suggested that the "buffer sort" should really simply say "buffering", but that it hijacks the buffering mechanism of sorting and therefore gets reported completely spuriously as a sort (see http://jonathanlewis.wordpress.com/2006/12/17/buffer-sorts/ ).
    In this case, I think the buffer sort may be a consequence of the broadcast distribution - and tells us that the entire broadcast is being buffered before the hash join starts. It's interesting to note that in the more recent of the two plans with a buffer sort, the second (probe) table of the hash join seems to be accessed first and broadcast before the first table is scanned to allow the join to occur.
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • Flex 4.0 HMAC.hash binary data

    I am trying to replicate a function I have in PHP in my Flex code.
    In PHP:
    $mac = base64_encode(hash_hmac('sha256', $message, $secret, true));
    When the last argument is true, hash_hmac outputs raw binary data; when it is false, it outputs lowercase hexits.
    I have the following code written in Flex:
    encryptedString = HMAC.hash(tempString, message, algorithm);
    In flex I am using the following imports:
         import com.adobe.crypto.HMAC;
         import com.adobe.crypto.SHA256;
    However, it outputs lowercase hexits. I cannot find a way to convert the Flex output to binary data to match the PHP.
    Does anyone know?

    I've solved this problem by adding a self-made function to the HMAC class:
    /**
     * Performs the HMAC hash algorithm and returns the raw bytes.
     * This function matches PHP's hash_hmac() when the "raw_output" parameter is set to true.
     * @param secret The secret key
     * @param message The message to hash
     * @param algorithm Hash object to use
     * @return raw binary data as a ByteArray
     */
    public static function getRawHash(secret:String, message:String, algorithm:Object = null):ByteArray {
         var hexHash:String = hash(secret, message, algorithm);
         var hashBinary:ByteArray = new ByteArray();
         // Walk the hex string two characters at a time, converting each pair back to a byte
         for (var i:uint = 0; i < hexHash.length; i += 2) {
              var c:String = hexHash.charAt(i) + hexHash.charAt(i + 1);
              hashBinary.writeByte(parseInt(c, 16));
         }
         return hashBinary;
    }
    Then, if you need a Base64 string, you can do something like this:
    private function createHashForSign(secretKey:String, message:String):String {
         // Get the raw HMAC bytes and Base64-encode them
         var hash:ByteArray = HMAC.getRawHash(secretKey, message, SHA1);
         var baseEncoder:Base64Encoder = new Base64Encoder();
         baseEncoder.encodeBytes(hash, 0, hash.length);
         return baseEncoder.toString();
    }
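
    For reference, here is a minimal Java sketch of the same pipeline the PHP one-liner performs (raw HMAC-SHA256, then Base64). The secret and message values are hypothetical, and it uses the standard javax.crypto and java.util.Base64 APIs rather than anything Flex-specific:
         import javax.crypto.Mac;
         import javax.crypto.spec.SecretKeySpec;
         import java.nio.charset.StandardCharsets;
         import java.util.Base64;

         public class RawHmacDemo {
              public static void main(String[] args) throws Exception {
                   String secret = "mySecret";     // hypothetical key
                   String message = "hello world"; // hypothetical message
                   // HMAC-SHA256 over the message, keyed with the secret
                   Mac mac = Mac.getInstance("HmacSHA256");
                   mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
                   // doFinal() returns the raw bytes -- the analogue of raw_output = true in PHP
                   byte[] raw = mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
                   // Base64-encode the raw bytes, like base64_encode() in PHP
                   System.out.println(Base64.getEncoder().encodeToString(raw));
              }
         }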

  • Ask about the hash algorithm when partitioning a table by hash

    hi all,
    I wanted to ask about the hash algorithm used when partitioning a table by hash. How does the hash algorithm distribute the data evenly across each partition? Anybody know?

    A simple calculation probably isn't going to be possible. The problem is that a simple hash function is unlikely to produce a nearly uniform data distribution, and a function that produces a nearly uniform data distribution is unlikely to be particularly simple to compute. ORA_HASH has to balance the desire to be quick to compute against the desire to produce nearly uniform output. I don't know what algorithm Oracle picked or where they made the trade-off; I expect the algorithm may well change across releases.
    Conceptually, though, if you had a perfect hash algorithm, it would provide a perfect "fingerprint" of the data and any valid input would have an equal a priori probability of being mapped anywhere in the valid output set-- just like a perfect encryption algorithm generates encrypted output that would appear totally random. If you can look at a piece of data and know that its hash was more likely to be in one part of the output set than another, that would mean that there would be an opportunity to attack the hash-- you could probabilistically reconstruct data if you knew the hash.
    If you assume that ORA_HASH is a perfect hash (it probably isn't, but it's probably close enough for this analysis so long as the range is a power of 2), then for any given input, any output is equally likely. If you create a hash partitioned table with 8 partitions, that basically equates to asking ORA_HASH to generate a hash between 0 and 7 and putting the data in whichever partition comes out. Since each value is equally likely to land in any of the partitions, you'd expect the data to be equally distributed-- unless, of course, there is a lot of repetition in the values in the column you are partitioning by. The hash for two identical values is identical, so that sort of repetitive data would cause the data distribution to be unequal.
    To take a simplistic example, let's assume you are hashing numbers and you have 8 partitions. The simplest possible hash algorithm would be "mod 8". That is, if you insert a value x, insert it into partition (x mod 8). If x = 2, 2 mod 8 = 2, so put it in partition 2. If x = 10, 10 mod 8 = 2, so put it in partition 2. If x = 14, 14 mod 8 = 6, so put it in partition 6. If the numeric values you insert are uniformly distributed (not unlikely if you're dealing with large amounts of data, and very likely if you're dealing with sequence-generated primary keys), all 8 partitions will be roughly equally full.
    Of course, this algorithm is far from perfect-- if all the data you are inserting is a power of 2, for example, nearly everything lands in partition 0 (2^k mod 8 = 0 for every k >= 3) and most of the other partitions stay empty. ORA_HASH needs to be quite a bit more complex than a simple mod N in order to provide a more uniform mapping even when there are patterns in the input.
    Justin
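
    To make the "mod 8" example above concrete, here is a small Java sketch. It is not Oracle's actual ORA_HASH, just the simplistic mod-N hash described above; it shows sequence-like keys spreading evenly across the partitions while power-of-2 keys pile up in partition 0:
         public class ModHashPartitionDemo {
              // The simplistic hash from the example above: partition = value mod N
              static int partitionFor(long value, int partitions) {
                   return (int) (value % partitions);
              }

              public static void main(String[] args) {
                   int partitions = 8;

                   // Sequence-generated keys 1..80000: roughly 10000 rows per partition
                   int[] evenCounts = new int[partitions];
                   for (long key = 1; key <= 80000; key++) {
                        evenCounts[partitionFor(key, partitions)]++;
                   }

                   // Powers of 2: every key from 8 upward lands in partition 0
                   int[] skewedCounts = new int[partitions];
                   for (long key = 1; key <= (1L << 40); key <<= 1) {
                        skewedCounts[partitionFor(key, partitions)]++;
                   }

                   for (int p = 0; p < partitions; p++) {
                        System.out.println("partition " + p
                             + ": sequence keys=" + evenCounts[p]
                             + ", power-of-2 keys=" + skewedCounts[p]);
                   }
              }
         }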

  • ACE URL Hash

    Hi All,
    I had an issue with the ACE two years ago where it was sending all YouTube traffic to the same cache while using URL hash. I received the response below from Cisco TAC.
    Does anyone know if a newer image resolved this?
    Regarding your question about the predictor used and "splitting" the requests going to YouTube so they are handled by two caches, please note that URL hashing only hashes the URL up to the "?", so we unfortunately cannot distinguish between requests that differ only after that point when using this predictor method. The "?" is the default URL parsing delimiter.
    Therefore, what we could try is changing the predictor method to another type, for example hash destination/source address or round robin, to verify whether the load gets distributed among the caches more evenly.
    You can also specify a begin- and end-pattern to look for a specific pattern within a URL; however, as already stated, the hashing has no effect after the "?".
    Regards
    Sameer Shah

    The ACE module and ACE 4710 appliance were enhanced so that the URL hashing predictor now includes the URL query string seen after the '?' delimiter.
    ACE module: A2(2.1) and later (BugID CSCsq99736)
    ACE 4710: A3(2.2) and later (BugID CSCsr30433)
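
    To illustrate why hashing only the part of the URL before the '?' collapsed every YouTube watch URL onto a single cache, here is a small Java sketch. The hash function and cache count are stand-ins for illustration, not the ACE's actual algorithm:
         public class UrlHashDemo {
              // Toy stand-in for a URL hash predictor: optionally ignore the query
              // string (the pre-fix behaviour), then hash to pick a cache.
              static int pickCache(String url, int caches, boolean includeQuery) {
                   int q = url.indexOf('?');
                   String hashed = (includeQuery || q < 0) ? url : url.substring(0, q);
                   return Math.abs(hashed.hashCode() % caches); // toy hash, not Cisco's
              }

              public static void main(String[] args) {
                   String[] urls = {
                        "http://www.youtube.com/watch?v=aaa",
                        "http://www.youtube.com/watch?v=bbb",
                        "http://www.youtube.com/watch?v=ccc",
                   };
                   // Before the fix every watch URL hashes to the same prefix, so all
                   // requests go to one cache; including the query spreads them out.
                   for (String u : urls) {
                        System.out.println(u
                             + " -> prefix-only: cache " + pickCache(u, 2, false)
                             + ", with query: cache " + pickCache(u, 2, true));
                   }
              }
         }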
