DBMS_CRYPTO.HASH produces a different hash result when I use it in sqlldr?

Hello,
If I execute the following in sqlplus:
SELECT LOWER(SYS.DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(CONCAT('Salt','000000000')),3)) FROM DUAL;
It produces one hash value. However, if I use the same logic in a SQL*Loader control file, I get a different hash for the same input:
OPTIONS (SKIP=7)
LOAD DATA
INFILE 'C:\data.csv'
TRUNCATE INTO TABLE information
FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
HOSTNAME          CONSTANT 'Company',
HOSTSERVICENAME     CONSTANT 'LOB',
ACCOUNTNAME,
LASTNAME,
EXTERNAL_ACCOUNT_ID     "LOWER(SYS.DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(CONCAT('Salt',:accountname)),3))"
)
The ACCOUNTNAME value in the file data.csv is also '000000000', yet the resulting hash is different. What am I doing wrong?
Thanks so much,
Jay

user3362629 wrote:
If anyone is interested, the problem was with the column datatype of the table. sqlldr was attempting to load data into a column of type NVARCHAR2. When I changed the column type to VARCHAR2, the hash values from both sqlldr and sqlplus were the same. I'm not sure if this is an Oracle bug or if this is expected behavior. This was using Oracle Enterprise Edition 11g.
Jay,
that's very interesting. One possible explanation that I could think of: Your SQL*Plus attempt:
If I execute the following in sqlplus:
SELECT LOWER(SYS.DBMS_CRYPTO.HASH(UTL_RAW.CAST_TO_RAW(CONCAT('Salt','000000000')),3)) FROM DUAL;
used a VARCHAR2 literal, but the SQL*Loader code probably used an NVARCHAR2 as input to the function. From a RAW data perspective these two, although representing the same string, could be quite different.
You could try to do the same in SQL*Plus by using an NVARCHAR2 input to the function. The problem is that this is not so simple to do with a literal: you would need an NVARCHAR2 literal (N'000000000') and in addition probably have to turn on the so-called "NCHAR String Literal Replacement", which means setting an environment variable (ORA_NCHAR_LITERAL_REPLACE = TRUE).
For more information regarding these issues, look here:
Text literals: http://download.oracle.com/docs/cd/B28359_01/server.111/b28298/ch7progrunicode.htm#CACHHIFE
NCHAR String Literal Replacement: http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/sql_elements003.htm#SQLRF00218
It's probably far simpler to use an NVARCHAR2 value stored somewhere in the database, or e.g. a concatenation of NCHR() calls to generate an NVARCHAR2 on the fly, than to deal with the literal conversion issues mentioned above.
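To see the point about the RAW representation, here is a minimal sketch (assuming a typical setup with AL32UTF8 as the database character set and AL16UTF16 as the national character set) that compares the bytes of the same string as VARCHAR2 and as NVARCHAR2, and hashes the NVARCHAR2 byte representation, which is roughly what the SQL*Loader run may have ended up hashing:
-- same characters, different bytes, therefore different input to the hash
SELECT DUMP('000000000', 16)           AS varchar2_bytes,
       DUMP(TO_NCHAR('000000000'), 16) AS nvarchar2_bytes
FROM   dual;
-- hash over the NVARCHAR2 (AL16UTF16) byte representation for comparison
SELECT LOWER(DBMS_CRYPTO.HASH(UTL_I18N.STRING_TO_RAW(CONCAT('Salt','000000000'), 'AL16UTF16'), 3)) FROM dual;
If the two byte dumps differ, the corresponding hash values will differ as well.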
Regards,
Randolf
Oracle related stuff blog:
http://oracle-randolf.blogspot.com/
SQLTools++ for Oracle (Open source Oracle GUI for Windows):
http://www.sqltools-plusplus.org:7676/
http://sourceforge.net/projects/sqlt-pp/

Similar Messages

  • I'm having trouble with something that redirects Google search results when I use Firefox on my PC. It's called the 'going on earth' virus. Do you have a fix that could rectify the vulnerability in your software?

    I'm having trouble with a virus or something which affects Google search results when I use Firefox on my PC ...
    When I search a topic gives me pages of links as normal, but when I click on a link, the page is hijacked to a site called 'www.goingonearth.com' ...
    I've done a separate search and found that other users are affected, but there doesn't seem to be a clear-cut solution ... (Norton, McAfee and Kaspersky don't seem to be able to detect/fix it).
    I'd like to continue using the Firefox/Google combination (nb: the hijack virus also affects IE but not Safari) - do you have a patch/fix that could rectify the vulnerability in your software?
    thanks

    ''' "... vulnerability in your software?" ''' <br />
    And it affects IE, too? Ya probably picked up some malware and you blame it on Firefox.
    Install, update, and run these programs in this order. They are listed in order of efficacy.<br />'''''(Not all programs detect the same Malware, so you may need to run them all to solve your problem.)''''' <br />These programs are all free for personal use, but some have limited functionality in the "free mode" - but those are features you really don't need to find and remove the problem that you have.<br />
    ''Note: If your Malware infection is bad enough and you are mis-directed to URL's other than what is posted, you may have to use a different PC to download these programs and use a USB stick to transfer them to the afflicted PC.''
    Malwarebytes' Anti-Malware - [http://www.malwarebytes.org/mbam.php] <br />
    SuperAntispyware - [http://www.superantispyware.com/] <br />
    AdAware - [http://www.lavasoftusa.com/software/adaware/] <br />
    Spybot Search & Destroy - [http://www.safer-networking.org/en/index.html] <br />
    Windows Defender: Home Page - [http://www.microsoft.com/windows/products/winfamily/defender/default.mspx]<br />
    Also, if you have a search engine re-direct problem, see this:<br />
    http://deletemalware.blogspot.com/2010/02/remove-google-redirect-virus.html
    If these don't find it or can't clear it, post in one of these forums for specialized malware removal help: <br />
    [http://www.spywarewarrior.com/index.php] <br />
    [http://forum.aumha.org/] <br />
    [http://www.spywareinfoforum.com/] <br />
    [http://bleepingcomputer.com]

  • I have iWork on my new MacBookPro and it works fine.  But I had the Beta iWorks in iCloud on my older desktop.  How do I get the Beta iWorks off my iCloud because it is different (and better) when I use it on my MacBookPro?

    I have iWork on my new MacBookPro and it works fine.  But I had the Beta iWorks in iCloud on my older desktop.  How do I get the Beta iWorks off my iCloud because it is different (and better) when I use it on my MacBookPro?  Shouldn't the iCloud be the same as the MacBookPro iWorks

    billjudy wrote:
      Shouldn't the iCloud be the same as the MacBookPro iWorks
    No, they are different apps.  The beta one in iCloud is for use by PCs as well as Macs via a web browser and lets users collaborate on documents.  It can't be removed.  Just ignore it if you have no use for it.

  • How can I narrow down results when I use 'command f' or 'spotlight' ?

    When I use 'command f' or 'spotlight'  I can be very specific with what I'm looking for and I will get soooo many results that the search is almost not worth it.

    System Preferences - Spotlight - Search Results

  • Different CCR result when compared to OM17 result

    Hello All ,
    What we have currently seen is that when we manually delete stock in SAP APO using RLC DELETE and then run the CCR report, it shows that the stock is INCONSISTENT, but when we run OM17 for STOCKS it shows that the stock is CONSISTENT.
    What CCR shows is acceptable, as we have deleted the stock and it shows it as inconsistent, but why does OM17 for stocks show it as consistent? Is this because of some bug? Please help me out with this.
    Regards,
    Aryan

    Aryan,
    I believe you have misunderstood the purpose of OM17.  This program checks for internal LC consistency (within itself) and for consistency between LC and the various SCM table-based databases.
    CCR on the other hand checks for consistency between SCM transactional data and ERP transactional data.
    Both reports should be run periodically, since they check for different things.
    Best Regards,
    DB49

  • Different results when fetching using Datamaps vs. Limitmaps

    Hello,
    I'm currently using the dbms_aw package to execute DML commands into AWs and the OLAP_TABLE function to access data. I'm encountering some unexpected behavior when trying to issue a "sort".
    Please consider the following:
    1. First I issue the command below and receive the following results:
    SQL> exec dbms_aw.execute('lmt Product to first 5;rpr w 100 PA.SHORTLABELF')
    PRODUCT PA.SHORTLABELF
    TOTALPROD Total Product
    FMGLC Gum Liq Ctr
    FMGLC24 Gum Liq Ctr 24s
    FMGLC24RP Gum Liq Ctr 24s RP
    FMMXS Mints X-S
    2. Next I sort the dimension values in status in ascending order by their short label (simply a text variable dimensioned by my product dimension).
    SQL> exec dbms_aw.execute('sort PRODUCT a PA.SHORTLABELF')
    PL/SQL procedure successfully completed.
    3. I looked at the results of the sort I get (as expected)
    SQL> exec dbms_aw.execute('rpr w 100 PA.SHORTLABELF')
    PRODUCT PA.SHORTLABELF
    FMGLC Gum Liq Ctr
    FMGLC24 Gum Liq Ctr 24s
    FMGLC24RP Gum Liq Ctr 24s RP
    FMMXS Mints X-S
    TOTALPROD Total Product
    4. Then I enter the following command using the OLAP_TABLE function and limitmaps. I receive the correct set of returned values; however, my sort has not been applied:
    SQL> SELECT * FROM TABLE(OLAP_TABLE('fmcalc DURATION SESSION','','',
    2 'MEASURE Time1 FROM PA.SHORTLABELF '))
    TIME1
    Total Product
    Gum Liq Ctr
    Gum Liq Ctr 24s
    Gum Liq Ctr 24s RP
    Mints X-S
    5. Finally, if I decide to go back and use datamaps in conjunction with the OLAP_TABLE function, my sorting has "stuck":
    SQL> SELECT Text1 FROM TABLE(OLAP_TABLE('fmcalc DURATION SESSION','TempTable','fetch PA.SHORTLABELF', ''))
    Text1
    Gum Liq Ctr
    Gum Liq Ctr 24s
    Gum Liq Ctr 24s RP
    Mints X-S
    Total Product
    So, I guess the workaround to using limitmaps is to use datamaps. My question is: why does this behavior occur when using limitmaps, and are there other scenarios where similar behaviour might occur? I ask mainly because, as a rule, I have tried to use limitmaps during my development.
    thanks
    brad

    Hi Brad,
    The short answer here is that SQL does not guarantee the order of rows unless it is explicitly sorted (at the SQL level). From that point of view, you could do something like
    exec dbms_aw.execute('lmt Product to first 5;rpr w 100 PA.SHORTLABELF')
    SELECT * FROM TABLE(OLAP_TABLE('fmcalc DURATION SESSION','','',
    'MEASURE Time1 FROM PA.SHORTLABELF ')) ORDER BY TIME1 ASC;
    (The reason this is happening is that the code responsible for returning rows in the OLAP_TABLE limit map implementation is tuned to return data in the most efficient order.)
    I hope this helps,
    Ekrem

  • Different PS results when pulled from bridge vs LR

    I pull the same file from Lightroom and Bridge into Photoshop.  I run the same adjustments (color), and the results are completely different.  Why?

  • Different results when I use the debug trace mode

    Hi,
    I'm using LabView to configure test subject equipment via SNMP messaging. During the process, and before each reading, I need to re-calibrate my spectrum analyzer for things like center frequency and channel bandwidth. To do this I have a function which sends out the SNMP get request to an SNMP server.
    Currently I have the same vi appearing in a sequence loop a couple of times. When I run the program in debug mode everything is good. The first step in the sequence gets me a valid center frequency and the second delivers a valid channel bandwidth. However, if I turn off the little light bulb (debug tool), I get a valid center frequency but the channel bandwidth returned to the calling vi is the center frequency.
    I've looked at the messages coming back from my SNMP server and they are correct. I've tried setting booleans which would hopefully ensure the correct message is delivered, but no... I even tried timers all over the place to slow things down.
    I'm running LabView 5.1 on an NT box with a P3 500 processor.
    This problem may have something to do with the same vi being called in the same sequence loop more than once??
    Any help would be appreciated.
    Regards
    Mike Gaskin

    You could make the vi reentrant, and make sure that the calls execute in the right order.
    "Michael Gaskin" wrote in message
    news:[email protected]..
    > Hi,
    >
    > I'm using LabView to configure test subject equipment via SNMP messaging.
    During the process and before each reading I need to re-calibrate my
    spectrum analyzer for things like center frequency and channel bandwidth.
    To do this I have a function which send out the SNMP get request to an SNMP
    server.
    >
    > Currently I have the same vi appearing in a sequence loop a couple of
    times. When I run the program in debug mode everything is good. The first
    step in the sequence gets me a valid center frequency and the second
    delivers a valid channel bandwidth. However if I turn off the little light
    bulb (d
    ebug tool) I get a valid center frequency but the channel bandwidth
    returned to the calling vi is the center frequency.
    >
    > I've looked at the messages comming back from my SNMP server and they are
    correct. I've tried setting booleans which would hopefully ensure the
    correct message is delivered but no.... I even tried timers all over the
    place to slow things down.
    >
    > I'm running LabView 5.1 on an NT box with a P3 500 processor.
    >
    > This problem may have something to do with the same vi being called in
    the same sequence loop more than once??
    >
    > Any help would be apreciated.
    >
    > Regards
    > Mike Gaskin
    >

  • Why do I get this result when I use split

    Hello guys
    I have created a script that looks for the active path:
    var myPath = (File($.fileName).parent.parent.parent.fullName);
    alert(myPath)
    I recieve "/xxxxx/xxxx/xxxx/341542"
    the only thing I am interested of in this information it's the number,
    so I added on the code
    var myPath = (File($.fileName).parent.parent.parent.fullName).split("xxxx/xxxx/xxxx/");
    the problem is
    I recieve ",341542"
    why do I recieve a comma before the number??
    there is no comma. in the path.

    Laubender,
    in my opinion, on Mac OS X path.fullName also gives the folder path separated with "/".
    Isn't it so?
    But back to the problem.
    The comma appears because split() returns an array: the text before the first separator match becomes element 0 (here essentially empty, since the separator matches right at the start of the path), and when the array is displayed by alert() its elements are joined with commas, hence the leading comma.
    The best way is to use name instead of fullName:
    alert(decodeURI(File($.fileName).parent.parent.parent.name))
    Otherwise, use the match function:
    If there are no digits at the end of the path, the result is null and match fails.
    I'm with Trevor and Kai Rübsamen: .match(/[\/:]\d+$/) or the split method should be the best in this case.

  • Not able to get all the results if i use "get-spdeletedsite" command

    Hi everyone,
    I am not able to get all the results when I use the below powershell command.
    get-spdeletedsite -limit all
    It is giving me only one site, but in fact we have more than 100 deleted sites.
    If I use the normal command like below, it is giving the below output:
    get-spdeletedsite
    Can anyone tell me how I can get all the deleted sites?
    Best Regards
    Anil Alladi

    Hi Anil,
    For the Limit parameter of the Get-SPDeletedSite cmdlet, when we specify its value as ALL, it will return all site collections for the given scope.
    For your scenario, the script "Get-SPDeletedSite -Limit ALL" will return all deleted site collections in your SharePoint environment.
    If you want to get all deleted sites, you can also export the result to CSV as below:
    Get-SPDeletedSite -Limit ALL | Export-Csv out.csv -NoTypeInformation
    Reference:
    http://technet.microsoft.com/en-us/library/hh286316(v=office.15).aspx
    Best Regards,
    Eric
    Eric Tao
    TechNet Community Support

  • SQL Query produces different results when inserting into a table

    I have an SQL query which produces different results when run as a simple query compared to when it is run as an INSERT INTO table SELECT ...
    The query is:
    SELECT   mhldr.account_number
    ,        NVL(MAX(DECODE(ap.party_sysid, mhldr.party_sysid,ap.empcat_code,NULL)),'UNKNWN') main_borrower_status
    ,        COUNT(1) num_apps
    FROM     app_parties ap
    ,        (
    SELECT   accsta.account_number
    ,        actply.party_sysid
    ,        RANK() OVER (PARTITION BY actply.table_sysid, actply.loanac_latype_code ORDER BY start_date, SYSID) ranking
    FROM     activity_players actply
    ,        account_status accsta
    WHERE    1 = 1
    AND      actply.table_id (+) = 'ACCGRP'
    AND      actply.acttyp_code (+) = 'MHLDRM'
    AND      NVL(actply.loanac_latype_code (+),TO_NUMBER(SUBSTR(accsta.account_number,9,2))) = TO_NUMBER(SUBSTR(accsta.account_number,9,2))
    AND      actply.table_sysid (+) = TO_NUMBER(SUBSTR(accsta.account_number,1,8))
    ) mhldr
    WHERE    1 = 1
    AND      ap.lenapp_account_number (+) = TO_NUMBER(SUBSTR(mhldr.account_number,1,8))
    GROUP BY mhldr.account_number;
    The INSERT INTO code:
    TRUNCATE TABLE applicant_summary;
    INSERT /*+ APPEND */
    INTO     applicant_summary
    (  account_number
    ,  main_borrower_status
    ,  num_apps
    )
    SELECT   mhldr.account_number
    ,        NVL(MAX(DECODE(ap.party_sysid, mhldr.party_sysid,ap.empcat_code,NULL)),'UNKNWN') main_borrower_status
    ,        COUNT(1) num_apps
    FROM     app_parties ap
    ,        (
    SELECT   accsta.account_number
    ,        actply.party_sysid
    ,        RANK() OVER (PARTITION BY actply.table_sysid, actply.loanac_latype_code ORDER BY start_date, SYSID) ranking
    FROM     activity_players actply
    ,        account_status accsta
    WHERE    1 = 1
    AND      actply.table_id (+) = 'ACCGRP'
    AND      actply.acttyp_code (+) = 'MHLDRM'
    AND      NVL(actply.loanac_latype_code (+),TO_NUMBER(SUBSTR(accsta.account_number,9,2))) = TO_NUMBER(SUBSTR(accsta.account_number,9,2))
    AND      actply.table_sysid (+) = TO_NUMBER(SUBSTR(accsta.account_number,1,8))
    ) mhldr
    WHERE    1 = 1
    AND      ap.lenapp_account_number (+) = TO_NUMBER(SUBSTR(mhldr.account_number,1,8))
    GROUP BY mhldr.account_number;
    When run as a query, this code consistently returns 2 for the num_apps field (for a certain group of accounts), but when run as an INSERT INTO command, the num_apps field is logged as 1. I have secured the tables used within the query to ensure that nothing is changing the data in the underlying tables.
    If I run the query as a cursor for loop with an insert into the applicant_summary table within the loop, I get the same results in the table as I get when I run as a stand alone query.
    I would appreciate any suggestions for what could be causing this odd behaviour.
    Cheers,
    Steve
    Oracle database details:
    Oracle Database 10g Release 10.2.0.2.0 - Production
    PL/SQL Release 10.2.0.2.0 - Production
    CORE 10.2.0.2.0 Production
    TNS for 32-bit Windows: Version 10.2.0.2.0 - Production
    NLSRTL Version 10.2.0.2.0 - Production
    Edited by: stevensutcliffe on Oct 10, 2008 5:26 AM
    Edited by: stevensutcliffe on Oct 10, 2008 5:27 AM

    stevensutcliffe wrote:
    Yes, using COUNT(*) gives the same result as COUNT(1).
    I have found another example of this kind of behaviour:
    Running the following INSERT statements produce different values for the total_amount_invested and num_records fields. It appears that adding the additional aggregation (MAX(amount_invested)) is causing problems with the other aggregated values.
    Again, I have ensured that the source data and destination tables are not being accessed / changed by any other processes or users. Is this potentially a bug in Oracle?
    Just as a side note, these are not INSERT statements but CTAS statements.
    The only non-bug explanation for this behaviour would be a potential query rewrite happening only under particular circumstances (but not always) in the lower integrity modes "trusted" or "stale_tolerated". So if you're not aware of any corresponding materialized views, your QUERY_REWRITE_INTEGRITY parameter is set to the default of "enforced" and your explain plan doesn't show any "MAT_VIEW REWRITE ACCESS" lines, I would consider this as a bug.
    Since you're running on 10.2.0.2 it's not unlikely that you hit one of the various "wrong result" bugs that exist(ed) in Oracle. I'm aware of a particular one I've hit in 10.2.0.2 when performing a parallel NESTED LOOP ANTI operation which returned wrong results, but only in parallel execution. Serial execution was showing the correct results.
    If you're performing parallel ddl/dml/query operations, try to do the same in serial execution to check if it is related to the parallel feature.
    You could also test if omitting the "APPEND" hint changes anything but still these are just workarounds for a buggy behaviour.
    I suggest to consider installing the latest patch set 10.2.0.4 but this requires thorough testing because there were (more or less) subtle changes/bugs introduced with [10.2.0.3|http://oracle-randolf.blogspot.com/2008/02/nasty-bug-introduced-with-patch-set.html] and [10.2.0.4|http://oracle-randolf.blogspot.com/2008/04/overview-of-new-and-changed-features-in.html].
    You could also open a SR with Oracle and clarify if there is already a one-off patch available for your 10.2.0.2 platform release. If not it's quite unlikely that you are going to get a backport for 10.2.0.2.
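    As a rough sketch of the checks suggested above (standard dictionary views and session settings; adapt to your own schema), you could rule out query rewrite and the parallel feature like this:
    -- check the rewrite integrity mode (default is "enforced")
    SELECT value FROM v$parameter WHERE name = 'query_rewrite_integrity';
    -- list materialized views that are enabled for query rewrite
    SELECT owner, mview_name, rewrite_enabled FROM all_mviews WHERE rewrite_enabled = 'Y';
    -- force serial execution for a test run of the statement
    ALTER SESSION DISABLE PARALLEL QUERY;
    ALTER SESSION DISABLE PARALLEL DML;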
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Two different HASH GROUP BY in execution plan

    Hi ALL;
    Oracle version
    select *From v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
    PL/SQL Release 11.1.0.7.0 - Production
    CORE    11.1.0.7.0      Production
    TNS for Linux: Version 11.1.0.7.0 - Production
    NLSRTL Version 11.1.0.7.0 - Production
    SQL
    select company_code, account_number, transaction_id,
    decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,
    (last_day(to_date('04/21/2010','MM/DD/YYYY')) - min(z.accounting_date) ) age,sum(z.amount)
    from (
         select /*+ PARALLEL(use, 2) */    company_code,substr(account_number, 1, 5) account_number,transaction_id,
         decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,use.amount,use.accounting_date
         from financials.unbalanced_subledger_entries use
         where use.accounting_date >= to_date('04/21/2010','MM/DD/YYYY')
         and use.accounting_date < to_date('04/21/2010','MM/DD/YYYY') + 1
    UNION ALL
         select /*+ PARALLEL(se, 2) */  company_code, substr(se.account_number, 1, 5) account_number,transaction_id,
         decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type) transaction_id_type,se.amount,se.accounting_date
         from financials.temp2_sl_snapshot_entries se,financials.account_numbers an
         where se.account_number = an.account_number
         and an.subledger_type in ('C', 'AC')
    ) z
    group by company_code,account_number,transaction_id,decode(transaction_id_type, 'CollectionID', 'SettlementGroupID', transaction_id_type)
    having abs(sum(z.amount)) >= 0.01
    explain plan
    Plan hash value: 1993777817
    | Id  | Operation                      | Name                         | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
    |   0 | SELECT STATEMENT               |                              |       |       | 76718 (100)|          |        |      |            |
    |   1 |  PX COORDINATOR                |                              |       |       |            |          |        |      |            |
    |   2 |   PX SEND QC (RANDOM)          | :TQ10002                     |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,02 | P->S | QC (RAND)  |
    |*  3 |    FILTER                      |                              |       |       |            |          |  Q1,02 | PCWC |            |
    |   4 |     HASH GROUP BY              |                              |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,02 | PCWP |            |
    |   5 |      PX RECEIVE                |                              |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,02 | PCWP |            |
    |   6 |       PX SEND HASH             | :TQ10001                     |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,01 | P->P | HASH       |
    |   7 |        HASH GROUP BY           |                              |    15M|  2055M| 76718   (2)| 00:15:21 |  Q1,01 | PCWP |            |
    |   8 |         VIEW                   |                              |    15M|  2055M| 76116   (1)| 00:15:14 |  Q1,01 | PCWP |            |
    |   9 |          UNION-ALL             |                              |       |       |            |          |  Q1,01 | PCWP |            |
    |  10 |           PX BLOCK ITERATOR    |                              |    11 |   539 |  1845   (1)| 00:00:23 |  Q1,01 | PCWC |            |
    |* 11 |            TABLE ACCESS FULL   | UNBALANCED_SUBLEDGER_ENTRIES |    11 |   539 |  1845   (1)| 00:00:23 |  Q1,01 | PCWP |            |
    |* 12 |           HASH JOIN            |                              |    15M|   928M| 74270   (1)| 00:14:52 |  Q1,01 | PCWP |            |
    |  13 |            BUFFER SORT         |                              |       |       |            |          |  Q1,01 | PCWC |            |
    |  14 |             PX RECEIVE         |                              |    21 |   210 |     2   (0)| 00:00:01 |  Q1,01 | PCWP |            |
    |  15 |              PX SEND BROADCAST | :TQ10000                     |    21 |   210 |     2   (0)| 00:00:01 |        | S->P | BROADCAST  |
    |* 16 |               TABLE ACCESS FULL| ACCOUNT_NUMBERS              |    21 |   210 |     2   (0)| 00:00:01 |        |      |            |
    |  17 |            PX BLOCK ITERATOR   |                              |    25M|  1250M| 74183   (1)| 00:14:51 |  Q1,01 | PCWC |            |
    |* 18 |             TABLE ACCESS FULL  | TEMP2_SL_SNAPSHOT_ENTRIES    |    25M|  1250M| 74183   (1)| 00:14:51 |  Q1,01 | PCWP |            |
    Predicate Information (identified by operation id):
       3 - filter(ABS(SUM(SYS_OP_CSR(SYS_OP_MSR(SUM("Z"."AMOUNT"),MIN("Z"."ACCOUNTING_DATE")),0)))>=.01)
      11 - access(:Z>=:Z AND :Z<=:Z)
           filter(("USE"."ACCOUNTING_DATE"<TO_DATE(' 2010-04-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "USE"."ACCOUNTING_DATE">=TO_DATE(' 2010-04-21 00:00:00', 'syyyy-mm-dd hh24:mi:ss')))
      12 - access("SE"."ACCOUNT_NUMBER"="AN"."ACCOUNT_NUMBER")
      16 - filter(("AN"."SUBLEDGER_TYPE"='AC' OR "AN"."SUBLEDGER_TYPE"='C'))
       18 - access(:Z>=:Z AND :Z<=:Z)
    I have a few doubts regarding this execution plan and I am sure my questions will get answered here.
    Q-1: Why am I getting two different HASH GROUP BY operations (operation ids 4 & 7) even though there is only a single GROUP BY clause? Is that due to the UNION ALL operator merging two different row sources, with the HASH GROUP BY being applied to each of them individually?
    Q-2: What does 'BUFFER SORT' (operation id 13) indicate? Sometimes I get this operation and sometimes I do not. For some other queries, I have observed around 10 GB of TEMP space and a high cost against this operation. So I am just curious whether it is really helpful; if not, how do I avoid it?
    Q-3: Under the PREDICATE section, what does step 18 suggest? I am not using any filter like this: access(:Z>=:Z AND :Z<=:Z)

    aychin wrote:
    Hi,
    About BUFFER SORT, first of all it is not specific to parallel executions. This step in the plan indicates that internal sorting has taken place. It doesn't mean that rows will be returned sorted; in other words, it doesn't guarantee that rows will be sorted in the resulting row set, because that is not the main purpose of this operation.
    I've previously suggested that the "buffer sort" should really simply say "buffering", but that it hijacks the buffering mechanism of sorting and therefore gets reported completely spuriously as a sort (see http://jonathanlewis.wordpress.com/2006/12/17/buffer-sorts/ ).
    In this case, I think the buffer sort may be a consequence of the broadcast distribution - and tells us that the entire broadcast is being buffered before the hash join starts. It's interesting to note that in the more recent of the two plans with a buffer sort, the second (probe) table in the hash join seems to be accessed first and broadcast before the first table is scanned to allow the join to occur.
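    As a side note, if you want to see what that buffer sort actually costs in memory and temp space in your own runs, a minimal sketch (assuming you can re-execute the statement in your session) is to pull the runtime plan statistics rather than the EXPLAIN PLAN output:
    -- run the original statement first with the GATHER_PLAN_STATISTICS hint, then:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
    The memory and temp columns against the BUFFER SORT line then show how much workarea and temp space the buffering consumed (for a parallel run you may need the 'ALLSTATS' format instead of 'ALLSTATS LAST' to include the slave processes).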
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)
    +"Science is more than a body of knowledge; it is a way of thinking"+
    +Carl Sagan+

  • MD5 implementation gives different hash values from java1.2 to java 1.3

    Hi,
    I have an MD5 implementation using the Java API. When I run the program using Java 1.2, I get one hash value. When I run the same program using Java 1.3/1.4, I get a different hash value.
    The hash value generated by Java 1.2 authenticates correctly with the server, but the one generated by 1.3/1.4 gives me an authentication failure with the server.
    I seriously suspect some encoding issue with the MD5 implementation from Java 1.2 to Java 1.3/1.4. I would like to know the root cause for getting a different hash with Java 1.3/1.4.
    Thanks
    sunil

    Here is the code; I am converting the string to bytes and then passing it to the MessageDigest.
    public static String getMD(String strInput) {
        /*---- Local Variable Declares ----*/
        String rStr = new String();
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            md.update(strInput.getBytes("UTF-8"));
            byte[] challengeResponse = md.digest();
            System.out.println("Len of BYTE ary in JAVA:" + challengeResponse.length);
            // Note: new String(byte[]) decodes the raw digest bytes using the JVM's default
            // charset; digest bytes above 0x7F are not valid in many encodings and get
            // replaced, which is why the resulting string can differ between Java versions.
            rStr = new String(challengeResponse);
        } catch(Exception e) {
            System.out.println("\n\nException while calling getMD\n\n");
            e.printStackTrace();
        } /* end of try catch block */
        return rStr;
    }
    When I run the code using Java 1.2, I get the following octal dump output:
    "0000000 037677 135543 076565 063432 136210 040616 004766 177265"
    When I run the code using Java 1.3/1.4, I get the following octal dump output:
    "0000000 037477 037543 076565 063432 037477 040477 004477 037477"

  • I get different result when I paste with mouse and shift+ins

    Sometimes I get a different result when I paste the clipboard content with the mouse and with the shift+insert key combination. Why? What can I do to stop this annoyance?

    Just to add a little to the above, there really is no universally applied standard (not even close) for how the primary, secondary, and clipboard buffers should be used.  Any X program can use them however they wish.  But there are some common patterns: selected text should be placed in the primary buffer, and X pastes from the primary buffer with either middle click or shift-ins by default.
    This default can be (and unfortunately often is) overridden by clipboard managers - I don't know why they don't stay true to their name and just manage the clipboard buffer, but they often don't.
    <mini rant>With no clipboard tools installed, selected text in any program I have is in the primary selection, and shift+ins or middle click paste from this primary selection.  Ctrl-C/Ctrl-V post to/from the clipboard buffer.  No need for extra tools.  If I install any clipboard manager, this default sanity of X11 goes straight to s(&*Y.  I don't like clipboard managers </mini rant>  <mini praise> X11 is great without any of that cruft added </mini praise>

  • Do IMAQ Cast Image or IMAQ Linear averages give different results when using different computers that are running under Windows XP ?

    Hello
    I'm currently developing an image processing algorithm using LabVIEW 7.1 and the associated IMAQ Vision tools. After several tests, I found a weird result. I put the LabVIEW algorithm - including the IMAQ VIs from the library, to make sure that I always use the same VIs - on my memory stick and used it on two different computers. I tested the same picture (also on my memory stick) and got two very different results.
    After several hours trying to understand why, I found that there was a difference between the results given by the two computers at the very beginning of the algorithm. Indeed, I used a JPEG file.
    To open it, I first create an Image with IMAQ Create (U8). Then, I open it.
    Then in my first sub-VI, I use IMAQ Cast Image to be sure that the picture is a U8 grayscale picture.
    Right after that, I use the IMAQ Linear Averages. The results of this VI are different on the two computers.
    I tried several times on the same picture: each computer always gives me the same result, but the two computers give different results from each other. So the results are not random.
    So my question is : Do IMAQ Cast Image or IMAQ Linear averages give different results when using different computers that are running under Windows XP ?
    My bet is on IMAQ Cast Image but I'm not quite sure, and I do not understand why. The LabVIEW and IMAQ versions are the same on both computers.
    The differences between the two computers are below:
    Computer 1 :
    Pentium(R) 4 CPU 3.20GHz with 1 GB of RAM. The processor is an Intel(R).
    The OS is windows XP Pro 2002
    Computer 2 :
    Pentium(R) 4 CPU 2.80GHz with 512 MB of RAM. The processor is an Intel(R).
    The OS is windows XP Pro 2002.
    If anybody can help me on this problem, it would be really helpful.
    Regards
    Florence P.

    Hi,
    Indeed it's a strange behaviour. Could you send me your VI and your JPEG file (or another file that reproduces the issue) so that I can check it here?
    I'll then try to find out what's happening.
    Regards
    Richard Keromen
    National Instruments France
