URGENT: ora_hash AND hash collision

We need to compare two tables residing in two different database instances (both on Oracle 10g Release 2).
I have heard of the ORA_HASH SQL function provided as part of Oracle 10g.
However, I need to ask: what is the probability of a hash collision?
Given that the maximum bucket size of ORA_HASH is 2^32-1, does that imply that there will be no hash collisions for a dataset of 2^32-1 records?
Appreciate your thoughts in this regard.
-Thanks!

Maybe you want this. Note that it will only tell you whether the tables are identical or not.
SQL> select * from emp;
SQL>
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
         1 gggg
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000                    20
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
      7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81       1000                    30
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10
15 rows selected.
SQL>     
SQL> create table emp1 nologging as select * from emp where empno <> 1;
Table created.
SQL> SELECT * FROM EMP1;
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000                    20
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
      7900 JAMES      CLERK           7698 03-DEC-81       1000                    30
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10
14 rows selected.
SQL>
SQL> create table emp2 nologging as select * from emp ;
Table created.
SQL> UPDATE EMP2 SET COMM = NVL(COMM,0) ;
15 rows updated.
SQL> COMMIT;
SQL> SELECT * FROM EMP2;
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
         1 gggg                                                          0
      7369 SMITH      CLERK           7902 17-DEC-80        800          0         20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975          0         20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850          0         30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450          0         10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000          0         20
      7839 KING       PRESIDENT            17-NOV-81       5000          0         10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
      7876 ADAMS      CLERK           7788 23-MAY-87       1100          0         20
      7900 JAMES      CLERK           7698 03-DEC-81       1000          0         30
      7902 FORD       ANALYST         7566 03-DEC-81       3000          0         20
      7934 MILLER     CLERK           7782 23-JAN-82       1300          0         10
15 rows selected.
SQL>
SQL> create table emp3 nologging as select * from emp ;
Table created.
SQL> SELECT * FROM EMP3;
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
         1 gggg
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000                    20
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
      7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81       1000                    30
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10
15 rows selected.
SQL>
SQL>  COLUMN HASH_CODE FORMAT 999999999999999999999999
SQL>  select sum(ora_hash(EMPNO||ENAME||JOB||MGR||HIREDATE||SAL||COMM||DEPTNO, 4294967294)) HASH_CODE from EMP;
                HASH_CODE
              34307672633
SQL>  select sum(ora_hash(EMPNO||ENAME||JOB||MGR||HIREDATE||SAL||COMM||DEPTNO, 4294967294)) HASH_CODE from EMP1;
                HASH_CODE
              31467742359
SQL>  select sum(ora_hash(EMPNO||ENAME||JOB||MGR||HIREDATE||SAL||COMM||DEPTNO, 4294967294)) HASH_CODE from EMP2;
                HASH_CODE
              31292750688
SQL>  select sum(ora_hash(EMPNO||ENAME||JOB||MGR||HIREDATE||SAL||COMM||DEPTNO, 4294967294)) HASH_CODE from EMP3;
                HASH_CODE
              34307672633
SQL>
Now observe that the hash codes for EMP and EMP3 are the same because the two tables are identical; in the other cases the hash codes differ.
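One caveat: summing 32-bit ORA_HASH values can itself collide (two different sets of rows can add up to the same total), so a matching sum strongly suggests, but does not prove, that the tables are identical. For an exact comparison, a symmetric-difference query is the reliable check; a minimal sketch, assuming the second copy of the table is reachable over a database link named REMOTE_DB:

SQL> (select * from emp minus select * from emp@remote_db)
  2  union all
  3  (select * from emp@remote_db minus select * from emp);

If this returns no rows, the two tables contain exactly the same data.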

Similar Messages

  • Regd: Usage of ora_hash AND hash collision


    user636222 wrote:
We need to compare two tables residing in two different database instances (both on Oracle 10g Release 2).
I have heard of the ORA_HASH SQL function provided as part of Oracle 10g.
However, I need to ask: what is the probability of a hash collision?
Given that the maximum bucket size of ORA_HASH is 2^32-1, does that imply that there will be no hash collisions for a dataset of 2^32-1 records?
Appreciate your thoughts in this regard.

There is no guarantee of being free of hash collisions regardless of the bucket size chosen, so if you need 100% accuracy you'll always have to check, whenever two hash values are equal, whether the two rows are actually the same or not.
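For a rough sense of scale (a standard birthday-problem estimate, not from the original reply): with b = 2^32 possible hash values and n rows, the probability of at least one collision is approximately 1 - e^(-n*(n-1)/(2*b)). That crosses 50% at only about 77,000 rows, so for anything approaching 2^32 rows a collision is a practical certainty, not a remote risk.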
So running the "minus" variant is probably more straightforward, but - depending on the table sizes - it might require a very large and resource/space/time-consuming sort operation.
Depending on your requirements you might be able to combine both approaches: maybe you can already reduce the set to compare by eliminating all rows having different hash values, since these are guaranteed to be different.
In order to lower the probability you could have a look at DBMS_CRYPTO.HASH, which offers 128-bit or 160-bit hash values, or DBMS_OBFUSCATION_TOOLKIT.MD5, which offers 128-bit hash values.
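For illustration, a minimal per-row sketch using DBMS_CRYPTO.HASH (this assumes EXECUTE on DBMS_CRYPTO has been granted; the literal 3 is the value of the DBMS_CRYPTO.HASH_SH1 constant, which cannot be referenced directly in plain SQL, and SHA-1 yields a 160-bit value, making accidental collisions astronomically unlikely):

SQL> select empno,
  2         dbms_crypto.hash(
  3           utl_raw.cast_to_raw(EMPNO||ENAME||JOB||MGR||HIREDATE||SAL||COMM||DEPTNO),
  4           3 /* dbms_crypto.hash_sh1 */) row_hash
  5  from emp;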
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • How hashtable handle hash collision?

    From Class Hashtable javadoc
    "in the case of a "hash collision", a single bucket stores multiple entries,
    which must be searched sequentially."
If a single bucket stores multiple entries of String objects, how
can the multiple entries be retrieved in a situation like this?
String s = hasht.get(key)
I tried a small test myself using two keys with identical hash codes, "BB" and "Aa", and they got stored in 2 different buckets in the hashtable, not as multiple entries in one bucket as stated in the
javadoc. Thank you.

Umm, the hashes are the same, so you're right there; you must have read the debugger wrong, though. Here is the relevant part of Hashtable.put():
     Entry<K,V> tab[] = table;
     int hash = key.hashCode();
     int index = (hash & 0x7FFFFFFF) % tab.length;
     // Right here it is the same bucket, but the entries form a linked
     // list inside the bucket and are searched sequentially.
     for (Entry<K,V> e = tab[index]; e != null; e = e.next) {
         if ((e.hash == hash) && e.key.equals(key)) {
             V old = e.value;
             e.value = value;
             return old;
         }
     }
     // rehashing stuff removed for post...
     // No entry with an equal key was found: create a new entry that
     // chains to the current head of the bucket's list.
     Entry<K,V> e = tab[index];
     tab[index] = new Entry<K,V>(hash, key, value, e);
     count++;

  • Java Hash Collision Denial Of Service Vulnerability

    There is Java Hash Collision Denial Of Service Vulnerability according to these sources:
    http://tomcat.10.n6.nabble.com/SECURITY-Apache-Tomcat-and-the-hashtable-collision-DoS-vulnerability-td2405294.html
    http://www.nruns.com/_downloads/advisory28122011.pdf
    http://www.securityfocus.com/bid/51236
    It mentions that Oracle is not going to release the fix for Java. Does anyone know if Oracle has any plan to release a fix, or whether it intends to ever fix it?
    Thanks,
    kymeng
    Edited by: user6992787 on Feb 10, 2012 12:08 PM

    I don't really see this as an Oracle problem - more a Tomcat problem. Any collection algorithm has limitations, and in this case the Tomcat team used the Java Hashtable to get the O(1) performance available when the hashes of the keys are effectively random, accepting the possible worst-case O(n^2) behaviour. Either they should have used a TreeMap with O(n log n) performance, or they should create their own implementation of Map that does not permit the DoS attack.
    I have never done any performance comparisons between HashMap and TreeMap, but for many years now I have pretty much always used a TreeMap, since I rarely find performance a significant problem (of course I don't write high-throughput applications such as Tomcat). I don't really see how Oracle should be involved in this problem; maybe the Tomcat team should be doing performance comparisons and/or research into algorithms that do not allow this DoS.

  • Using JHS tables and hashing with salt algorithms for Weblogic security

    We are going to build our first enterprise ADF/JHeadstart application. For the security part, we plan the following:
    1. We will use JHS tables as authentication for ADF Security.
    2. We will use JAAS for authentication and Custom for authorization.
    3. We need to use the JHeadstart security service screens in our application to manage users, roles and permissions, instead of doing user/group management within Weblogic.
    4. We will create a new Weblogic SQL Authentication Provider.
    5. We will store the salt with the password in the database table.
    6. We will use Oracle MDS.
    There are some blogs online giving detailed steps on how to create a Weblogic SQL Authentication Provider and use JHS tables as authentication for ADF Security. I am not sure about the implementation of hashing-with-salt algorithms, as ideally we'd like to use the JHS security service screens in the application to manage users, roles and permissions, not Weblogic. We are going to try a JMX client to interact with the Weblogic API, which looks like a flexible approach. Does anybody have experience working with JMX, the SQL Authentication Provider and hashing-with-salt algorithms? Just want to make sure we are on the right track.
    Thanks,
    Sarah

    To be clear, we are planning on using a JMX client at the entity level, using custom JHS entity classes.
    BradW, working with Sarah
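On the salted-hashing piece itself, here is a minimal PL/SQL sketch of generating a salt and a salted password hash (a generic illustration, not the JHS or Weblogic implementation; it assumes EXECUTE on DBMS_CRYPTO has been granted):

SQL> declare
  2    l_salt raw(16) := dbms_crypto.randombytes(16); -- random per-user salt
  3    l_hash raw(20);
  4  begin
  5    -- hash the password bytes concatenated with the salt (SHA-1, 160 bits)
  6    l_hash := dbms_crypto.hash(
  7                utl_raw.concat(utl_raw.cast_to_raw('secret'), l_salt),
  8                dbms_crypto.hash_sh1);
  9    -- store both l_salt and l_hash in the user table; at login, recompute
 10    -- the hash with the stored salt and compare against the stored value
 11  end;
 12  /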

  • Creation of Sorted, Standard and Hashed Internal Tables

    Hi ..
    Please specify how to create sorted, standard and hashed internal tables,
    and please give the full code for this.
    Thanks

    Standard tables
    This is the most appropriate type if you are going to address the individual table entries using the index. Index access is the quickest possible access. You should fill a standard table by appending lines (ABAP APPEND statement), and read, modify and delete entries by specifying the index (INDEX option with the relevant ABAP command). The access time for a standard table increases in a linear relationship with the number of table entries. If you need key access, standard tables are particularly useful if you can fill and process the table in separate steps. For example, you could fill the table by appending entries, and then sort it. If you use the binary search option with key access, the response time is logarithmically proportional to the number of table entries.
    Sorted tables
    This is the most appropriate type if you need a table which is sorted as you fill it. You fill sorted tables using the INSERT statement. Entries are inserted according to the sort sequence defined through the table key. Any illegal entries are recognized as soon as you try to add them to the table. The response time for key access is logarithmically proportional to the number of table entries, since the system always uses a binary search. Sorted tables are particularly useful for partially sequential processing in a LOOP if you specify the beginning of the table key in the WHERE condition.
    Hashed tables
    This is the most appropriate type for any table where the main operation is key access. You cannot access a hashed table using its index. The response time for key access remains constant, regardless of the number of table entries. Like database tables, hashed tables always have a unique key. Hashed tables are useful if you want to construct and use an internal table which resembles a database table or for processing large amounts of data.
    Special Features of Standard Tables
    Unlike sorted tables, hashed tables, and key access to internal tables, which were only introduced in Release 4.0, standard tables already existed several releases previously. Defining a line type, table type, and tables without a header line have only been possible since Release 3.0. For this reason, there are certain features of standard tables that still exist for compatibility reasons.
    Standard Tables Before Release 3.0
    Before Release 3.0, internal tables all had header lines and a flat-structured line type. There were no independent table types. You could only create a table object using the OCCURS addition in the DATA statement, followed by a declaration of a flat structure:
    DATA: BEGIN OF <itab> OCCURS <n>,
    <fi> ...
    END OF <itab>.
    This statement declared an internal table <itab> with the line type defined following the OCCURS addition. Furthermore, all internal tables had header lines.
    The number <n> in the OCCURS addition had the same meaning as in the INITIAL SIZE addition from Release 4.0. Entering ‘0’ had the same effect as omitting the INITIAL SIZE addition. In this case, the initial size of the table is determined by the system.
    The above statement is still possible in Release 4.0, and has roughly the same function as the following statements:
    TYPES: BEGIN OF <itab>,
    <fi> ...,
    END OF <itab>.
    DATA <itab> TYPE STANDARD TABLE OF <itab>
    WITH NON-UNIQUE DEFAULT KEY
    INITIAL SIZE <n>
    WITH HEADER LINE.
    In the original statement, no independent data type <itab> is created. Instead, the line type only exists as an attribute of the data object <itab>.
    Standard Tables From Release 3.0
    Since Release 3.0, it has been possible to create table types using
    TYPES <t> TYPE|LIKE <linetype> OCCURS <n>.
    and table objects using
    DATA <itab> TYPE|LIKE <linetype> OCCURS <n> WITH HEADER LINE.
    The effect of the OCCURS addition is to construct a standard table with the data type <linetype>. The line type can be any data type. The number <n> in the OCCURS addition has the same meaning as before Release 3.0. Before Release 4.0, the key of an internal table was always the default key, that is, all non-numeric fields that were not themselves internal tables.
    The above statements are still possible in Release 4.0, and have the same function as the following statements:
    TYPES|DATA <itab> TYPE|LIKE STANDARD TABLE OF <linetype>
    WITH NON-UNIQUE DEFAULT KEY
    INITIAL SIZE <n>
    WITH HEADER LINE.
    They can also be replaced by the following statements:
    Standard Tables From Release 4.0
    When you create a standard table, you can use the following forms of the TYPES and DATA statements. The addition INITIAL SIZE is also possible in all of the statements. The addition WITH HEADER LINE is possible in the DATA statement.
    Standard Table Types
    Generic Standard Table Type:
    TYPES <itab> TYPE|LIKE STANDARD TABLE OF <linetype>.
    The table key is not defined.
    Fully-Specified Standard Table Type:
    TYPES <itab> TYPE|LIKE STANDARD TABLE OF <linetype>
    WITH NON-UNIQUE <key>.
    The key of a fully-specified standard table is always non-unique.
    Standard Table Objects
    Short Forms of the DATA Statement :
    DATA <itab> TYPE|LIKE STANDARD TABLE OF <linetype>.
    DATA <itab> TYPE|LIKE STANDARD TABLE OF <linetype>
    WITH DEFAULT KEY.
    Both of these DATA statements are automatically completed by the system as follows:
    DATA <itab> TYPE|LIKE STANDARD TABLE OF <linetype>
    WITH NON-UNIQUE DEFAULT KEY.
    The purpose of the shortened forms of the DATA statement is to keep the declaration of standard tables, which are compatible with internal tables from previous releases, as simple as possible. When you declare a standard table with reference to the above type, the system automatically adopts the default key as the table key.
    Fully-Specified Standard Tables:
    DATA <itab> TYPE|LIKE STANDARD TABLE OF <linetype>
    WITH NON-UNIQUE <key>.
    The key of a standard table is always non-unique.
    Internal table objects
    Internal tables are dynamic variable data objects. Like all variables, you declare them using the DATA statement. You can also declare static internal tables in procedures using the STATICS statement, and static internal tables in classes using the CLASS-DATA statement. This description is restricted to the DATA statement. However, it applies equally to the STATICS and CLASS-DATA statements.
    Reference to Declared Internal Table Types
    Like all other data objects, you can declare internal table objects using the LIKE or TYPE addition of the DATA statement.
    DATA <itab> TYPE <type>|LIKE <obj> WITH HEADER LINE.
    Here, the LIKE addition refers to an existing table object in the same program. The TYPE addition can refer to an internal type in the program declared using the TYPES statement, or a table type in the ABAP Dictionary.
    You must ensure that you only refer to tables that are fully typed. Referring to generic table types (ANY TABLE, INDEX TABLE) or not specifying the key fully is not allowed (for exceptions, refer to Special Features of Standard Tables).
    The optional addition WITH HEADER line declares an extra data object with the same name and line type as the internal table. This data object is known as the header line of the internal table. You use it as a work area when working with the internal table (see Using the Header Line as a Work Area). When you use internal tables with header lines, you must remember that the header line and the body of the table have the same name. If you have an internal table with header line and you want to address the body of the table, you must indicate this by placing brackets after the table name (<itab>[]). Otherwise, ABAP interprets the name as the name of the header line and not of the body of the table. You can avoid this potential confusion by using internal tables without header lines. In particular, internal tables nested in structures or other internal tables must not have a header line, since this can lead to ambiguous expressions.
    TYPES VECTOR TYPE SORTED TABLE OF I WITH UNIQUE KEY TABLE LINE.
    DATA: ITAB TYPE VECTOR,
    JTAB LIKE ITAB WITH HEADER LINE.
    MOVE ITAB TO JTAB. <- Syntax error!
    MOVE ITAB TO JTAB[].
    The table object ITAB is created with reference to the table type VECTOR. The table object JTAB has the same data type as ITAB. JTAB also has a header line. In the first MOVE statement, JTAB addresses the header line. Since this has the data type I, and the table type of ITAB cannot be converted into an elementary type, the MOVE statement causes a syntax error. The second MOVE statement is correct, since both operands are table objects.
    Please reward if useful.

  • Generally when does optimizer use nested loop and Hash joins  ?

    Version: 11.2.0.3, 10.2
    Let's say I have tables called ORDER and ORDER_DETAIL.
    ORDER_DETAIL is the child table of ORDER.
    This is what I understand about nested loops:
    When we join the ORDER and ORDER_DETAIL tables, Oracle will form a 'nested loop' in which, for each ORDER_ID in the ORDER table (the outer loop), Oracle will look for the corresponding ORDER_IDs in the ORDER_DETAIL table.
    Is a nested loop used when the driving table (ORDER in this case) is smaller than the child table (ORDER_DETAIL)?
    Is a nested loop more likely to use indexes in general?
    How would these two tables be joined using a hash join?
    When is it ideal to use hash joins?

    Your description of a nested loop is correct.
    The overall rule is that Oracle will use the plan that it calculates to be, in general, fastest. That mostly means fewest I/Os, but there are various factors that adjust its calculation - e.g. it expects index blocks to be cached, multiple entries read from an index may reference the same block, and full scans get multiple blocks per I/O. It also has CPU cost calculations, but they generally only become significant with function/package calls or domain indexes (spatial, text, ...).
    A nested loop with an index will require one indexed read of the child table per outer-table row, so its I/O cost is roughly twice the number of rows expected to match the where-clause conditions on the outer table.
    A hash join reads the smaller table into an in-memory hash table, then matches the rows from the larger table against it, so its I/O cost is the cost of a full scan of each table (unless the smaller table is too big to fit in a single in-memory hash table). Hash joins generally don't use indexes to look up each result. They can use an index as a data source - as a narrow version of the table, or as a way to identify the rows satisfying the other where-clause conditions.
    If you are processing the whole of both tables, Oracle is likely to use a hash join, and be very fast and efficient about it.
    If your where clause restricts it to a just few rows from the parent table and a few corresponding rows from the child table, and you have an index Oracle is likely to use a nested loops solution.
    If the tables are very small, either plan is efficient - you may be surprised by its choice.
    Don't worry about plans with full scans and hash joins - they are usually very fast and efficient. Bad performance often comes from having to do nested-loop lookups for lots of rows.
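To see which join method the optimizer actually chooses for a given query, EXPLAIN PLAN plus DBMS_XPLAN is the quickest check; a minimal sketch (the ORDERS/ORDER_DETAIL names and the join column are assumptions for illustration):

SQL> explain plan for
  2  select o.order_id, d.*
  3  from orders o join order_detail d on d.order_id = o.order_id
  4  where o.order_id = :b1;

SQL> select * from table(dbms_xplan.display);

With a selective predicate and an index on ORDER_DETAIL.ORDER_ID you would typically see NESTED LOOPS; drop the where clause and the plan will usually switch to a HASH JOIN with full scans.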

  • What is the difference between standard, sorted and hashed tables

    Can anyone say what the difference is between standard, sorted and hashed tables?

    Hi,
    Standard Tables:
    Standard tables have a linear index. You can access them using either the index or the key. If you use the key, the response time is in linear relationship to the number of table entries. The key of a standard table is always non-unique, and you may not include any specification for the uniqueness in the table definition.
    This table type is particularly appropriate if you want to address individual table entries using the index. This is the quickest way to access table entries. To fill a standard table, append lines using the (APPEND) statement. You should read, modify and delete lines by referring to the index (INDEX option with the relevant ABAP command). The response time for accessing a standard table is in linear relation to the number of table entries. If you need to use key access, standard tables are appropriate if you can fill and process the table in separate steps. For example, you can fill a standard table by appending records and then sort it. If you then use key access with the binary search option (BINARY), the response time is in logarithmic relation to
    the number of table entries.
    Sorted Tables:
    Sorted tables are always saved correctly sorted by key. They also have a linear index, and, like standard tables, you can access them using either the table index or the key. When you use the key, the response time is in logarithmic relationship to the number of table entries, since the system uses a binary search. The key of a sorted table can be either unique or non-unique, and you must specify either UNIQUE or NON-UNIQUE in the table definition. Standard tables and sorted tables both belong to the generic group of index tables.
    This table type is particularly suitable if you want the table to be sorted while you are still adding entries to it. You fill the table using the (INSERT) statement, according to the sort sequence defined in the table key. Table entries that do not fit are recognised before they are inserted. The response time for access using the key is in logarithmic relation to the number of
    table entries, since the system automatically uses a binary search. Sorted tables are appropriate for partially sequential processing in a LOOP, as long as the WHERE condition contains the beginning of the table key.
    Hashed Tables:
    Hashed tables have no internal linear index. You can only access hashed tables by specifying the key. The response time is constant, regardless of the number of table entries, since the search uses a hash algorithm. The key of a hashed table must be unique, and you must specify UNIQUE in the table definition.
    This table type is particularly suitable if you want mainly to use key access for table entries. You cannot access hashed tables using the index. When you use key access, the response time remains constant, regardless of the number of table entries. As with database tables, the key of a hashed table is always unique. Hashed tables are therefore a useful way of constructing and
    using internal tables that are similar to database tables.
    Regards,
    Ferry Lianto

  • Actual difference between a standard, sorted and hashed table

    hi,
    1. What is the actual difference between a
       standard, sorted and hashed table? and
    2. Where and when are these actually used and applied?
       Please provide an explanation with an example.

    Hi,
    Standard Internal Tables
    Standard tables have a linear index. You can access them using either the index or the key. If you use the key, the response time is in linear relationship to the number of table entries. The key of a standard table is always non-unique, and you may not include any specification for the uniqueness in the table definition.
    This table type is particularly appropriate if you want to address individual table entries using the index. This is the quickest way to access table entries. To fill a standard table, append lines using the (APPEND) statement. You should read, modify and delete lines by referring to the index (INDEX option with the relevant ABAP command).  The response time for accessing a standard table is in linear relation to the number of table entries. If you need to use key access, standard tables are appropriate if you can fill and process the table in separate steps. For example, you can fill a standard table by appending records and then sort it. If you then use key access with the binary search option (BINARY), the response time is in logarithmic relation to
    the number of table entries.
    Sorted Internal Tables
    Sorted tables are always saved correctly sorted by key. They also have a linear index, and, like standard tables, you can access them using either the table index or the key. When you use the key, the response time is in logarithmic relationship to the number of table entries, since the system uses a binary search. The key of a sorted table can be either unique or non-unique, and you must specify either UNIQUE or NON-UNIQUE in the table definition. Standard tables and sorted tables both belong to the generic group of index tables.
    This table type is particularly suitable if you want the table to be sorted while you are still adding entries to it. You fill the table using the (INSERT) statement, according to the sort sequence defined in the table key. Table entries that do not fit are recognised before they are inserted. The response time for access using the key is in logarithmic relation to the number of
    table entries, since the system automatically uses a binary search. Sorted tables are appropriate for partially sequential processing in a LOOP, as long as the WHERE condition contains the beginning of the table key.
    Hashed Internal Tables
    Hashed tables have no internal linear index. You can only access hashed tables by specifying the key. The response time is constant, regardless of the number of table entries, since the search uses a hash algorithm. The key of a hashed table must be unique, and you must specify UNIQUE in the table definition.
    This table type is particularly suitable if you want mainly to use key access for table entries. You cannot access hashed tables using the index. When you use key access, the response time remains constant, regardless of the number of table entries. As with database tables, the key of a hashed table is always unique. Hashed tables are therefore a useful way of constructing and
    using internal tables that are similar to database tables.
    THANKS
    MRUTYUN

  • Tilde (~) and hash (#) are broken

    I have a 2009 MacBook Pro 13-inch, on which I recently installed Windows 7. I have discovered, however, that my tilde key and my hash key are both set to the incorrect symbol. On all English keyboard layouts, the tilde key outputs § (or ± with shift held down), while the hash key has been replaced by the pound symbol (£). I am at a loss as to why this would be the case or how to fix it. I have changed my keyboard to US International and several other English keyboards, but it doesn't change anything. When changing to a different language (such as Chinese) I can type the normal tilde and hash, but this has obvious limitations. Any help would be greatly appreciated. Thank you.

    Martel101 wrote:
    I have changed my keyboard to US International and several other english keyboards
    Have you tried changing the keyboard setting inside Windows to one that has (Mac) after the name?

  • Sorted and hashed tables

    What happens when duplicate entries are inserted into sorted and hashed tables?

    Hi,
    Sorted internal tables can be of two types:
    unique or non-unique.
    If you enter duplicate records into a sorted table with a unique key, it will raise an error.
    If you enter duplicate records into a sorted table with a non-unique key, it will not raise an error.
    Hashed tables only have unique keys,
    so there is no way to enter duplicate records.
    For more information see the following link:
    http://help.sap.com/saphelp_nw70/helpdata/en/fc/eb35de358411d1829f0000e829fbfe/content.htm
    Reward if helpful.
    Jagadish

  • Ora_hash - Same hash value for different inputs (Urgent)

    Hi,
    Trying to use ora_hash to join between tables, but I noticed that in some cases, when working on different input values, the ora_hash function generates the same result.
    select ora_hash('oomeroe03|6NU3|LS006P|7884|1|17-JUL-13 13.18.22.528000|0005043|'),ora_hash('GSAHFFXTK|GCQ3|A6253S|12765|1|17-JUL-13 17.26.26.853000|0136423|')
    from dual
    Output value : 1387341941
    Oracle version is 11gR2.
    Thanks

    Why would anyone limit the hash distribution to three buckets?
    However, one must understand that the default seed is 0, so repeating the same input gets the same hash value unless the seed is changed.
    SQL> select ora_hash(rn) , rn
      2  from
      3  (Select rownum as rn from dual connect by level < 11)
      4  order by rn;
    ORA_HASH(RN)         RN
      2342552567          1
      2064090006          2
      2706503459          3
      3217185531          4
       365452098          5
      1021760792          6
       738226831          7
      3510633070          8
      1706589901          9
      1237562873         10
    10 rows selected.
    SQL> l
      1  select ora_hash(rn) , rn
      2  from
      3  (Select rownum as rn from dual connect by level < 11)
      4* order by rn
    SQL> /
    ORA_HASH(RN)         RN
      2342552567          1
      2064090006          2
      2706503459          3
      3217185531          4
       365452098          5
      1021760792          6
       738226831          7
      3510633070          8
      1706589901          9
      1237562873         10
    10 rows selected.
    SQL> /
    ORA_HASH(RN)         RN
      2342552567          1
      2064090006          2
      2706503459          3
      3217185531          4
       365452098          5
      1021760792          6
       738226831          7
      3510633070          8
      1706589901          9
      1237562873         10
    10 rows selected.
    SQL>
    Hemant K Chitale
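A minimal sketch of the seed's effect (the third argument of ORA_HASH; the default max_bucket is 4294967295 and the default seed is 0):

SQL> select ora_hash(1) h_default,
  2         ora_hash(1, 4294967295, 0) h_seed0,
  3         ora_hash(1, 4294967295, 1) h_seed1
  4  from dual;

The first two columns are identical by definition, while the third will normally differ because only the seed changed.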

  • MD5 and Checksum Collisions ... A problem in APEX?

    There is a glut of information on the internet about the weaknesses of MD5 as a hash calculation - for example http://lookit.typepad.com/lookit/2006/10/a_new_demonstra.html - including a number of demonstrations of how easily colliding documents can be generated.
    My questions are:
    1. Why did the APEX team use MD5 when there are better hash options available in Oracle's own Obfuscation Toolkit or DBMS_CRYPTO for 10g and above?
    2. What evidence can be given as to why we should not worry about potential collisions in our own APEX applications?
    Thanks!
    Doug

    Doug,
    1. Why did the APEX team use MD5 when there are better hash options available in Oracle's own Obfuscation Toolkit or DBMS_CRYPTO for 10g and above?
    At the time of the decision, MD5 was deemed to be the best available choice for the versions of Oracle in use then, and with the developers' knowledge about the overall adequacy of the algorithm for the purposes contemplated, based on what was generally accepted wisdom on the topic in the 1998-2000 timeframe.
    2. What evidence can be given as to why we should not worry about potential collisions in our own APEX applications?
    Rigorously supportable evidence of this nature will have to come from parties better at math than I am (call this qualification Q). But here are some observations, some perhaps useful, some just fun to think about:
    a) When you say "the math behind it points to the fact that 1 in every 2^16th string will collide in some way", I don't know what the entire assertion is, but I think (subject to Q) that 2^64 is a more useful number, being the average number of operations that would be required to generate a collision based on a 128-bit output. But that doesn't change the fact that the algorithm is imperfect, a characteristic of all methods.
    b) For this discussion, I will assume that the real "worry" is about accidental collisions, not mounted attacks.
    c) If we assume that "competing updaters" of a row in a real on-line application environment are more likely than not doing "different work" with the application, albeit against the same row, then the non-zero probability that the first updater's changes to a row will change the row length (the sum of the lengths of the character representation of the affected columns, or the md5 message m1) makes a collision of md5(m1) with md5(m2) less likely than in the canonical case described in the md5 collision risk examples which use messages of equal length. As I understand it, this is due to the large effect of the message length on the resultant hash. Of course, I cannot defend this further because of Q.
    d) The problem addressed by our optimistic locking technique is "lost update prevention", not concurrency control. We want to maximize the cases where updates to single rows made by writer A using simple form pages do not get overwritten by updates made by writer B. The risk of this happening may be greatest when writer A commits a change, then writer B initiates a change and in full view of writer A's changes proceeds to overwrite those changes (update DEPARTMENT_TONER_INVENTORY SET ONHAND = ONHAND - 3 WHERE DEPTNO=4400). Assuming that page validations and table triggers do not prevent the second update due to data consistency checks and the like, writer B's changes will be committed and writer A's changes will be "lost". Whatever machinery (if any) you put in place in the back-end to prevent that from happening is sufficient to handle the md5-collision case as well. Therefore, the md5 problem is not only not a new class of problem, but it's relatively insignificant because of its high improbability.
    e) Speaking of high improbability, there is an inherent exposure in the implementation of this method in Application Express wherein writer A and writer B initiate changes to a row and obtain identical checksums on the pre-changed row. Then writer A submits the form, which proceeds to re-compute the md5 on the row. That computation is delayed because the session gets swapped out, the table is accessed over a slow database link, whatever. This is time T1. At time T2 writer B submits its form, which re-computes the md5 on the row, finds that it matches the md5 on the page and then immediately updates and commits its changes at time T3. Writer A's md5 calculation completed just before writer B's commit. The md5 matches that on the posted page. Writer A's changes then overwrite those just committed by writer B. As improbable as this might seem, I'll bet (subject to Q) that if you had a really high-concurrency workload (which would generate an unacceptably high number of "checksum mismatch" errors) you would see this timing-based exposure far more (orders of magnitude) often than you would see the md5 collision problem.
    f) In reality, users of applications who get an annoying number (frequency) of "row checksum mismatch" errors will complain loudly that the application is not working for them. They won't put up with it. They need a better architecture or set of business processes that prevent concurrent updates from the front end. The way this is addressed will (steeply) reduce the frequency of concurrent update attempts and by extension will make the md5 collision problem even less likely in that environment.
    g) In conclusion, I think developers should worry about the lost update problem (and solutions for it) more in terms of the application architecture, the data model, processes/procedures that control how functionality within the application is used, etc., and less in terms of the safety-net of row-level optimistic locking implementation methods, the refinement of which may only serve to change the frequency of certain exposures from "once in a blue moon" to "a month of Sundays" (colloquial metrics required by Q).
    Perhaps "Faith Based Locking" would be more apt.
    Scott
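For context, a minimal sketch of an MD5-style row checksum in SQL (a generic illustration of the technique, not the actual APEX internals; the literal 2 is the value of the DBMS_CRYPTO.HASH_MD5 constant, since package constants cannot be referenced in plain SQL):

SQL> select dbms_crypto.hash(
  2           utl_raw.cast_to_raw(empno||'|'||ename||'|'||sal||'|'||comm),
  3           2 /* dbms_crypto.hash_md5 */) row_checksum
  4  from emp
  5  where empno = 7369;

The checksum is fetched when the form is rendered and recomputed at submit time; the update proceeds only if the two values still match.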

  • Classpath and hash Character (#)

    Hi,
    I have some jar files in my Java classpath, and it looks as if the JVM does not manage to find them when the jar file path contains a hash character (#) (a java.lang.NoClassDefFoundError exception is raised).
    I haven't found anything about this in docs.
    Can anyone help me ?
    Thanks
    Michel

    Maybe not.
    However, once I no longer need the classes stored in a file path containing a hash character, everything works fine.
    You can experiment by yourself following this basic scenario:
    1) Store your famous HelloWorld.class in a directory containing a hash character (e.g. C:\tmp\te#st)
    2) Run your JVM as follows: java -classpath C:\tmp\te#st HelloWorld
    3) You should see the message
    "Exception in thread "main" java.lang.NoClassDefFoundError: HelloWorld"
    If not, tell me - I am completely wrong.
    PS: Environment: Windows NT, SDK 1.4.0 or SDK 1.3.1
    Michel

  • Bind join and hash join

    Hi
    I would like to know if there is a difference between a bind join and a hash join.
    For example, if I write SQL code in my query tool (in Data Federator Query Server Administrator XI 3.0) it is translated into a hashjoin(....); if I set the system parameters the right way to produce a bind join, hashjoin(...) is present again!
    How do I understand the difference?
    bye

    Query to track the holder and waiter info
    People who reach this place for a similar problem can use the above link to find their answer
    Regards,
    Vishal
