LABVIEW HMAC-SHA1 implementation
Hello all,
We need an HMAC-SHA1 implementation in LabVIEW. Can anyone help?
Thanks,
Josh
Hello Josh,
We have a Community example that uses HMAC-SHA1 that might help you get started.
SHA-1 Cryptographic Hash Function
Searching the Community Code Exchange might be a good place to find additional code that has implemented HMAC-SHA1 in LabVIEW.
Regards,
M. Whitaker
ni.com/support
Similar Messages
-
HMAC (SHA1) key longer than 81 characters not possible?
Not sure whether I'm in the correct forum...
To sign a message for a specific application with an HMAC-SHA1 hash, I need an 83-character key.
My problem: the function module 'SET_HMAC_KEY' throws the exception "param_length_error". After testing several key lengths, I found that the maximum valid length is 81. Is there any reason for this?
With 3rd-party libraries (e.g. Python and JavaScript), longer keys work.
Code:
CALL FUNCTION 'SET_HMAC_KEY'
EXPORTING
generate_random_key = ' '
alg = 'SHA1'
keycstr = 'cB1phTHISISATESTVuZMDmWCz1CEMy82iBC3HgFLpE&7857T...YFqV93gRJQ'
client_independent = ' '
EXCEPTIONS
unknown_alg = 1
param_length_error = 2
internal_error = 3
param_missing = 4
malloc_error = 5
abap_caller_error = 6
base64_error = 7
calc_hmac_error = 8
rsec_record_access_denied = 9
rsec_secstore_access_denied = 10
rsec_error = 11
rng_error = 12
record_number_error = 13
OTHERS = 14.
Best regards, Uwe
Edited by: Julius Bussche on Aug 5, 2010 10:19 PM
I truncated the key further because in a coding tag it breaks the formatting when too long.
Hi,
yes, we can :-). Let's say that the SAP implementation did support keys longer than 81 bytes. Then, according to the specification, if the key is longer than the hash function's block size (64 bytes for SHA-1), it is first hashed, reducing the original key to a new key whose size equals the hash function's output size (20 bytes for SHA-1). Therefore, doing this step manually before calling SET_HMAC_KEY is equivalent to calling a SET_HMAC_KEY that supports keys longer than 81 bytes.
The easiest way to check this is to compare some HMAC-SHA1 implementation with the result produced by my proposed logic.
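This claim can be checked with Python's standard hmac module, which performs the RFC 2104 key-reduction step internally: an HMAC computed with the long key directly must match one computed with the SHA-1-reduced key. (The 90-character key below is just a stand-in for the truncated key in the post.)

```python
import hashlib
import hmac

# A 90-character key, a stand-in for the (truncated) key in the post;
# it is longer than SHA-1's 64-byte block size.
long_key = b"012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789"
message = b"Hello"

# HMAC-SHA1 with the long key directly: per RFC 2104 the library
# hashes the key down to 20 bytes internally.
direct = hmac.new(long_key, message, hashlib.sha1).hexdigest()

# Doing the key reduction manually first, as proposed for SET_HMAC_KEY.
reduced_key = hashlib.sha1(long_key).digest()
manual = hmac.new(reduced_key, message, hashlib.sha1).hexdigest()

print(direct == manual)  # True: both paths yield the same MAC
```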
DATA: text TYPE string,
key_str TYPE string,
hash TYPE hash160x,
key TYPE xstring,
hmac TYPE hash512_base_64.
text = 'Hello'.
key_str = '012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789'.
CALL FUNCTION 'CALCULATE_HASH_FOR_CHAR'
EXPORTING
data = key_str
IMPORTING
hashx = hash.
key = hash.
CALL FUNCTION 'SET_HMAC_KEY'
EXPORTING
generate_random_key = space
alg = 'SHA1'
keyxstr = key
client_independent = space.
CALL FUNCTION 'CALCULATE_HMAC_FOR_CHAR'
EXPORTING
alg = 'SHA1'
data = text
IMPORTING
hmacbase64 = hmac.
WRITE: / hmac.
Javascript version
var hmac = Crypto.HMAC(Crypto.SHA1, "Message", "Secret Passphrase");
var hmacBytes = Crypto.HMAC(Crypto.SHA1, "Message", "Secret Passphrase", { asBytes: true });
var hmacString = Crypto.HMAC(Crypto.SHA1, "Message", "Secret Passphrase", { asString: true });
Both implementations return "qsXNz/wecK4PMob6VG9RyRX6DQI=".
Cheers
Sorry for formatting but it looks like something is broken.
Edited by: Martin Voros on Aug 6, 2010 10:34 PM -
HMAC-SHA1 ???
Hello,
I have to implement a key derivation using HMAC-SHA1.
Does anybody know where I can find a Java class for this algorithm?
Thanks, but I cannot find an implementation for HMAC/SHA1 in javax.crypto.Mac. I get a NoSuchAlgorithmException for every variant of Mac.getInstance() I have tried.
Mac mac = Mac.getInstance("HmacSHA");
mac.update(pkcs5Bytes);
mac.update(salt);
tmp = mac.doFinal();
Algorithm HmacSHA not available
at javax.crypto.Mac.getInstance(DashoA12275) -
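For the record, the registered JCA algorithm name is "HmacSHA1" (with the trailing "1"), not "HmacSHA", hence the NoSuchAlgorithmException; Mac.getInstance("HmacSHA1") should work. The same update/doFinal sequence can be sketched in Python for cross-checking; the key, password bytes, and salt below are hypothetical placeholders, since the originals are not shown.

```python
import hashlib
import hmac

# Hypothetical placeholder inputs; the original pkcs5Bytes/salt are not shown.
key = b"example-key"
pkcs5_bytes = b"example-password-bytes"
salt = b"example-salt"

# Equivalent of: mac.update(pkcs5Bytes); mac.update(salt); mac.doFinal()
mac = hmac.new(key, digestmod=hashlib.sha1)
mac.update(pkcs5_bytes)
mac.update(salt)
digest = mac.digest()
print(len(digest))  # 20: HMAC-SHA1 always yields a 20-byte MAC
```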
Ksetup: Enforce use of AES256-CTS-HMAC-SHA1-96 fails
Hi,
Windows 7 Home Premium x64 authenticating to a Kerberos 5 install on Ubuntu 14.04.2. Please note the problems are not with the latter part, several Linux clients use the Kerberos KDC without issue, and an install of "Kerberos For Windows"
with "Network Identity Manager" on the Windows 7 client works fine, but it does not integrate with the rest of the system, so...
I have used Ksetup to set the realm, add a KDC, mapped the local user to the principal, and set the machine password (principal exists in the KDC); no problems. However, the KDC is configured to only accept AES256-CTS-HMAC-SHA1-96.
When I try the following it does not work:
C:\>ksetup /setenctypeattr REALM AES256-CTS-HMAC-SHA1-96
Setting enctypes for domain REALM to:AES256-CTS-HMAC-SHA1-96
Setting enctypes on REALM failed with 0xc0000034
Failed /SetEncTypeAttr : 0xc0000034
C:\>ksetup /addenctypeattr REALM AES256-CTS-HMAC-SHA1-96
Query of attributes on REALM failed with 0xc0000034
Failed /AddEncTypeAttr : 0xc0000034
When I perform a kinit, this is apparent (note that this is getting a response from the KDC, as using an invalid username results in a different error explicitly stating that it is invalid):
C:\>kinit username
Password for username@REALM:
Exception: krb_error 14 KDC has no support for encryption type (14) - CANT_FIND_CLIENT_KEY KDC has no support for encryption type
KrbException: KDC has no support for encryption type (14) - CANT_FIND_CLIENT_KEY
at sun.security.krb5.KrbAsRep.<init>(Unknown Source)
at sun.security.krb5.KrbAsReq.getReply(Unknown Source)
at sun.security.krb5.KrbAsReq.getReply(Unknown Source)
at sun.security.krb5.internal.tools.Kinit.sendASRequest(Unknown Source)
at sun.security.krb5.internal.tools.Kinit.<init>(Unknown Source)
at sun.security.krb5.internal.tools.Kinit.main(Unknown Source)
Caused by: KrbException: Identifier doesn't match expected value (906)
at sun.security.krb5.internal.KDCRep.init(Unknown Source)
at sun.security.krb5.internal.ASRep.init(Unknown Source)
at sun.security.krb5.internal.ASRep.<init>(Unknown Source)
... 6 more
I have already set in the Group Policy settings the value of "Network security: Configure encryption types allowed for Kerberos" to "AES256_HMAC_SHA1" only.
How can I force Windows to use the correct encryption type?
For completeness, output of ksetup below:
C:\>ksetup
default realm = REALM (external)
REALM:
kdc = kdc.server.realm
Realm Flags = 0x0 (No Realm Flags)
Mapping username@REALM to Username.
Regards, Rob.
Edit: Just found some interesting output in the KDC logs. These are the only entries in there for the IP address of the Win7 client.
Apr 04 11:15:23 hostname krb5kdc[1711](info): AS_REQ (4 etypes {18 17 16 23}) 10.x.x.x: CLIENT_NOT_FOUND: KERBEROS-KDC-PROBE@REALM for <unknown server>, Client not found in Kerberos database
Apr 04 11:22:24 hostname krb5kdc[1711](info): AS_REQ (4 etypes {18 17 16 23}) 10.x.x.x: CLIENT_NOT_FOUND: KERBEROS-KDC-PROBE@REALM for <unknown server>, Client not found in Kerberos database
Apr 04 11:34:02 hostname krb5kdc[1711](info): AS_REQ (5 etypes {3 1 23 16 17}) 10.x.x.x: CLIENT_NOT_FOUND: Username@REALM for <unknown server>, Client not found in Kerberos database
Apr 04 11:34:18 hostname krb5kdc[1711](info): AS_REQ (5 etypes {3 1 23 16 17}) 10.x.x.x: CANT_FIND_CLIENT_KEY: username@REALM for krbtgt/REALM@REALM, KDC has no support for encryption type
Apr 04 12:07:13 hostname krb5kdc[1711](info): AS_REQ (4 etypes {18 17 16 23}) 10.x.x.x: CLIENT_NOT_FOUND: KERBEROS-KDC-PROBE@REALM for <unknown server>, Client not found in Kerberos database
Apr 04 12:33:45 hostname krb5kdc[1711](info): AS_REQ (2 etypes {18 3}) 10.x.x.x: ISSUE: authtime 1428147225, etypes {rep=18 tkt=18 ses=18}, username@REALM for krbtgt/REALM@REALM
Apr 04 12:33:45 hostname krb5kdc[1711](info): TGS_REQ (1 etypes {18}) 10.x.x.x: BAD_ENCRYPTION_TYPE: authtime 0, username@REALM for cifs/nas.server.realm@REALM, KDC has no support for encryption type
Apr 04 12:46:17 hostname krb5kdc[1711](info): AS_REQ (5 etypes {3 1 23 16 17}) 10.x.x.x: CANT_FIND_CLIENT_KEY: username@REALM for krbtgt/REALM@REALM, KDC has no support for encryption type
Hi,
I'm sorry, but this question needs to be posted in the Windows Server forum; please use the link below to post your question there:
https://social.technet.microsoft.com/Forums/sharepoint/en-US/home?category=windowsserver
Please remember to mark the replies as answers if they help, and unmark the answers if they provide no help. If you have feedback for TechNet Support, contact [email protected] -
HMAC SHA1 Signature for google
Hi,
I need to build a Google signature using HMAC-SHA1 and find a way to do it on our WAS 6.20 system.
You can see the Google explanation under http://code.google.com/apis/maps/documentation/premier/guide.html#URLSigning.
I believe I can use the FMs SSFC_BASE64_ENCODE and SSFC_BASE64_DECODE for the Base64 parts.
However, I don't know how to do the HMAC-SHA1 part.
We don't have FMs like CALCULATE_HMAC_FOR_RAW in our system, so I think I'll have to create a new FM, but I have no idea how to build this.
Any ideas?
Hi James (or anybody else out there),
Did you ever find an answer or solution to this? Some feedback on whether you were successful would be interesting.
Note that we too would like to leverage the Google Enterprise / Google Maps API for Business but have not (yet) figured out how to deal with SHA1 in ABAP. -
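For anyone stuck on the same point, here is a sketch of the signing scheme described in the linked Google documentation, written in Python rather than ABAP: the shared secret is URL-safe Base64, the HMAC-SHA1 is computed over the URL's path plus query string, and the signature is appended URL-safe Base64 encoded. The client ID and shared secret below are made-up placeholders.

```python
import base64
import hashlib
import hmac
from urllib.parse import urlparse

def sign_url(url: str, shared_secret_b64: str) -> str:
    """Append an HMAC-SHA1 signature per the linked URL-signing scheme."""
    key = base64.urlsafe_b64decode(shared_secret_b64)   # secret is URL-safe Base64
    parsed = urlparse(url)
    to_sign = f"{parsed.path}?{parsed.query}".encode()  # sign path + query only
    sig = hmac.new(key, to_sign, hashlib.sha1).digest()
    return url + "&signature=" + base64.urlsafe_b64encode(sig).decode()

# Made-up client ID and shared secret, for illustration only.
secret = base64.urlsafe_b64encode(b"example-shared-secret").decode()
signed = sign_url(
    "http://maps.googleapis.com/maps/api/geocode/json?address=Berlin&client=gme-example",
    secret,
)
print(signed)
```

Porting this to ABAP is then a matter of combining the Base64 FMs mentioned above with an HMAC-SHA1 FM over the path+query string.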
HMAC-SHA1 signature generation error.
I'm using Flex Builder 4 for accessing the CloudStack API and getting responses. When I execute my Flex program, the URL is generated with the commands, the API, and the signature. But the XML shows an error, as below:
<?xml version="1.0" encoding="UTF-8"?>
-<listzonesresponse cloud-stack-version="4.0.1.20130201075054">
<errorcode>401</errorcode><errortext>unable to verify user credentials and/or request signature
</errortext></listzonesresponse>
Is it a problem with encoding the signature? I'm using HMAC-SHA1. When I did the same process in Python, I got the list of zones as output.
Can anyone help me correct the error?
Thanks in advance! -
Is LabVIEW best for implementing an array of microphones?
Hello techies,
I am planning to implement an array of microphones for localization of sound. Is LabVIEW the best for this or does anybody know a better one??
I would appreciate it if anybody could help me with this.
Thanks in advance!
Stream of consciousness alert.
My sense is that LabVIEW is not the BEST.
The best is probably an all-hardware solution. Very difficult to build, harder to debug, harder still to modify.
Introduce software, the next best is probably coded in machine language. Probably harder than all-hardware.
Next best is to use a low-level language such as C with custom routines optimized for your particular problem. Hard, but doable and not user-friendly.
Move to a higher level language such as C++, better UI, slightly worse performance.
Move to LabVIEW/Measurement Studio. Slightly worse overhead, great UI, relatively easy to modify and debug. Probably the only one that works on a reasonable budget/timeline.
99+% of applications would probably not need the performance (at least initially) beyond what LV can deliver. Even if you did, you'd be crazy not to start with the most straightforward.
My personal bias is towards Measurement Studio. I feel that it gives me the ability to get my hands dirty when necessary but maintain a very clean UI. With LV I feel that there is some overhead when interfacing with external code. If I already knew LV would I learn/buy Measurement Studio just for this application, almost certainly not.
Don't let perfection be the enemy of the very, very good. I say go with LV.
End stream of consciousness. -
LabView set theory implementation
Hey NI community,
I am about to slog through building a set theory implementation. Before I begin I was wondering if there is a current implementation already, or if anyone had ideas for a memory-efficient underlying containment object. Thanks much.
Nathan
I went ahead and implemented a basic set class with GOOP. The base container class is an array of variants, so it's more or less universal. Please comment on usefulness or improvements.
Attachments:
set.zip 153 KB -
Hi,
I have written an applet to sign with ALG_HMAC_SHA1. I'm testing it with the CREF given with the Javacard Kit 2.2.2 but it does not work.
The getInstance call just throws the exception.
The code is:
public OTP_RFC4226() {
try{
hmacSha1 = Signature.getInstance(Signature.ALG_HMAC_SHA1, false);
If anyone has an idea, I would be pleased.
Thx.
Cauch
I have one piece of good news and one piece of bad news for you:
The bad news: the Java Card API has ALG_HMAC_SHA1, ALG_HMAC_SHA_256, ALG_HMAC_SHA_384, and ALG_HMAC_SHA_512 support on paper, but more or less only on paper. Not many cards support them. (I found one Renesas card which supports it; feel free to correct me.)
The good news: almost all Java Cards support ALG_SHA. Please read up on the ALG_HMAC_SHA1 algorithm to see how to implement your own ALG_HMAC_SHA1 based on ALG_SHA. (It is a piece of cake.) -
Hello!
I'm quite new to java card and I've encountered a problem.
Whenever I add:
HMACKey hmacKey = (HMACKey) KeyBuilder.buildKey(KeyBuilder.TYPE_HMAC, KeyBuilder.LENGTH_HMAC_SHA_1_BLOCK_64, false);
The JCOP simulator throws this during upload:
Status: Wrong data
jcshell: Error code: 6a80 (Wrong data)
jcshell: Wrong response APDU: 6A80
Unexpected error; aborting execution
Without that line, everything is OK. What is causing this problem?
Using JCOP v3.2.8 with Java Card 2.2.2 API.
Edit:
Hmm, Is it because JCOP is based on 2.2.1 API?
Cheers, Nikola
Edited by: NikolaDP on Dec 12, 2009 9:11 AM
Hi,
This is because either the algorithm or key size are not supported by your card/simulator. You could try different key sizes to see what is supported.
Having not used the version of JCOP Tools you are using I cannot say if that key is supported. Have you tried loading your code on to a physical card that supports this key? I have had problems in the past where JCOP Tools did not support AES but the JCOP card did. If this is the case, you may not be able to use the simulator.
Cheers,
Shane -
I would like to ask if somebody has already implemented the HMAC algorithm in ABAP
(http://tools.ietf.org/html/rfc2104#section-3). I need to calculate the HMAC-SHA1 hash code for authentication purposes.
thanks,
martin
My solution for the HMAC implementation:
FUNCTION Z_CALCULATE_HMAC .
*"*"Local Interface:
*" IMPORTING
*" REFERENCE(IV_HASH_ALG) TYPE HASHALG
*" REFERENCE(IV_MESSAGE) TYPE XSTRING
*" REFERENCE(IV_KEY) TYPE XSTRING
*" EXPORTING
*" REFERENCE(EV_HASH) TYPE HASH160
* H(K XOR opad, H(K XOR ipad, text))
* B = 64 bytes
DATA: ipad_x TYPE xstring,
opad_x TYPE xstring,
key_x TYPE xstring,
x1 TYPE x,
x2 TYPE x,
x3 TYPE x,
length_key TYPE i,
chars_appended TYPE i,
xor1 TYPE xstring,
xor2 TYPE xstring,
ev_hash_x TYPE hash160x.
* -- index 0. - ipad, opad
* ipad = the byte 0x36 repeated B times
* opad = the byte 0x5C repeated B times.
x1 = '36'.
x2 = '5C'.
x3 = '00'.
DO 64 TIMES.
CONCATENATE ipad_x x1 INTO ipad_x IN BYTE MODE.
CONCATENATE opad_x x2 INTO opad_x IN BYTE MODE.
ENDDO.
* -- index 1. - extend key to 64 bytes
* append zeros to the end of K to create a B byte string
* (e.g., if K is of length 20 bytes and B=64, then K will be appended with 44 zero bytes 0x00)
* KEY is already sent in HEX format
key_x = iv_key.
length_key = XSTRLEN( key_x ).
chars_appended = 64 - length_key.
IF chars_appended > 0 .
DO chars_appended TIMES.
CONCATENATE key_x x3 INTO key_x IN BYTE MODE.
ENDDO.
ENDIF.
* -- index 2. - first calculation = Key XOR ipad
* XOR (bitwise exclusive-OR) the B byte string computed in step (1) with ipad
xor1 = key_x BIT-XOR ipad_x.
* -- index 3.
* append the stream of data 'text' to the B byte string resulting from step (2)
* message is already sent in HEX format
* iv_message_x = iv_message.
CONCATENATE xor1 iv_message INTO xor1 IN BYTE MODE.
* -- index 4.
* apply H to the stream generated in step (3)
CALL FUNCTION 'CALCULATE_HASH_FOR_RAW'
EXPORTING
alg = iv_hash_alg
data = xor1
* length = 20
IMPORTING
hashx = ev_hash_x.
* -- index 5.
* XOR (bitwise exclusive-OR) the B byte string computed in step (1) with opad
xor2 = key_x BIT-XOR opad_x.
* -- index 6.
* append the H result from step (4) to the B byte string resulting from step (5)
* iv_message_x = ev_hash_x.
CONCATENATE xor2 ev_hash_x INTO xor2 IN BYTE MODE.
* -- index 7.
* apply H to the stream generated in step (6) and output the result
CALL FUNCTION 'CALCULATE_HASH_FOR_RAW'
EXPORTING
alg = iv_hash_alg
data = xor2
IMPORTING
hash = ev_hash.
ENDFUNCTION.
Usage:
CALL FUNCTION 'Z_CALCULATE_HMAC'
EXPORTING
iv_hash_alg = 'SHA1'
iv_message = '4D415254494E' "MARTIN
iv_key = '42524154' "BRAT
IMPORTING
EV_HASH = LV_HASH .
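As a cross-check of the function module above, the same RFC 2104 steps can be sketched in Python and compared against a library HMAC; the step numbers in the comments mirror the index comments in the ABAP code.

```python
import hashlib
import hmac

def manual_hmac_sha1(key: bytes, message: bytes) -> bytes:
    """RFC 2104 HMAC-SHA1 built from plain SHA-1, mirroring the ABAP steps."""
    block = 64                                  # B = 64 bytes for SHA-1
    if len(key) > block:                        # long keys are hashed first
        key = hashlib.sha1(key).digest()        # (this step is absent in the ABAP FM)
    key = key.ljust(block, b"\x00")             # index 1: pad key with 0x00 to B bytes
    inner = bytes(b ^ 0x36 for b in key)        # index 2: K XOR ipad
    outer = bytes(b ^ 0x5C for b in key)        # index 5: K XOR opad
    h_inner = hashlib.sha1(inner + message).digest()  # indexes 3-4
    return hashlib.sha1(outer + h_inner).digest()     # indexes 6-7

# Same inputs as the usage example: 4D415254494E = "MARTIN", 42524154 = "BRAT"
digest = manual_hmac_sha1(bytes.fromhex("42524154"), bytes.fromhex("4D415254494E"))
print(digest == hmac.new(b"BRAT", b"MARTIN", hashlib.sha1).digest())  # True
```

Feeding the same hex inputs through the function module above should produce a matching EV_HASH.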
regards,
martin -
SPI implementation in LabVIEW only
Hi Folk,
A couple years ago I used a LabVIEW program that implemented SPI using nothing but a comm or parallel port.
It was from a customer and was custom code that I did not have access to.
Does anyone have an example of a LabVIEW-only SPI solution?
Thanks,
-SS
Hi Simon,
What would you be connecting to the serial/parallel port? The following article, Understanding the SPI Bus with NI LabVIEW, talks about different options for communicating with SPI in LabVIEW.
The following link has also been helpful for me when generating SPI digital waveforms: SPI Digital Waveform Reference Library
Do you remember needing any specific drivers to run that previous code?
Scott A
SSP Product Manager
National Instruments -
Hi,
I am trying to port a Matlab program into a MathScript script. Following is the first few lines of the script:
clear;
image1=imread('C:\LV_VertAlign\DSC_0104.jpg','jpg');
image_double1=im2double(image1);
image_gray11=rgb2gray(image_double1);
In the first line, imread is supposed to read in a photo taken from a DSLR camera, so I think it's a 32-bit image. And this is the error I got when trying to run this script in LabVIEW 8.5.1's MathScript window:
"Error in function imread at line 2. A problem occurred in a subVI call."
What is the problem here? and why does it complain about a subVI call?
Thanks for any help to point me in the right direction.
-Anh
Hello Anh,
As you may know, LabVIEW MathScript is implemented on top of LabVIEW. The error message you received indicates that a problem occurred in one of the LabVIEW subVIs that MathScript calls. As Jim indicated, the problem is in the file type specifier you passed to the function. MathScript requires the use of 'JPEG' and not 'jpg' or even 'jpeg.' In this case, we could return a better error message. I have filed a bug report (115804) for this issue.
You will find that once you fix this, the im2double and rgb2gray functions are not supported. In LabVIEW MathScript, you generally can execute scripts written in the MATLAB® language syntax. However, the MathScript engine executes the scripts, and the MathScript engine does not support some functions that the MATLAB software supports.
We will look into adding these functions in a future release of LabVIEW MathScript. If you need this functionality now, these functions are very simple to write yourself. If you are performing additional image analysis in your script, you may wish to purchase the IMAQ toolkit. It doesn't add any functions to MathScript at this time, but you could continue your analysis outside of MathScript with LabVIEW VIs.
MATLAB® is a registered trademark of The MathWorks, Inc.
Grant M.
Staff Software Engineer | LabVIEW Math & Signal Processing | National Instruments -
How will a user-defined MATLAB function work in LabVIEW?
Hi all,
I am trying to implement a MATLAB function which I wrote in MATLAB 7.9 (R2009b):
function [ op ] = myimplement( ip )
The file is attached below.
I want to use the MATLAB function in LabVIEW. I have tried the MATLAB script node, but a few of the functions defined in my MATLAB function are not working properly.
So is there an alternative way to implement a MATLAB function in the LabVIEW environment, to implement my algorithm and get the same results?
Thanks,
waiting for a kind reply.
I have been trying for many days but am not getting any results.
Attachments:
myimplement.zip 2 KB
Hi smercurio_fc, and sorry if you mind my double post; I am in the last month of my postgraduate research, so I need to complete it urgently.
I am attaching a detailed file describing what I am facing.
I am doing the same thing in MATLAB and getting results, but when doing it in LabVIEW using the MATLAB script node I am not: error 1048 occurs.
I am attaching a Word file; kindly help me out with that problem.
I will be thankful to you.
tc
Attachments:
my matlab function.docx 134 KB
my matlab function.doc 154 KB -
Hi,
I would like to build an application with the FTDI FT2232 Chip which should communicate over SPI with an external device.
I am using LabVIEW and have implemented predefined VIs from the FTDI webpage.
I think I have configured all the important parameters, but my problem is that I can't see a clock signal on the scope at the CLK pin during a read/write operation.
I am not sure if I have done all the configuration necessary for SPI communication; perhaps I have forgotten something or made a mistake in the configuration. Maybe somebody has an idea what went wrong:
1) Set the chip into MPSSE mode to activate SPI functionality (Clock, Data Out & CS are defined as OUTPUT, Data In is defined as INPUT)
2) SPI Open
3) SPI Set Clock
4) SPI Init
5) SPI Write
I can execute all these subfunctions successfully, but the clock doesn't work!
In the SPI_Write subfunction I can change the idle level of the CLOCK, DATA OUT and CS pins. When I play with these parameters and change them from high to low, for example, I can see these levels on the scope.
Does anybody know if there is something missing in the configuration?
Regards,
Markus
@NN70, @Wetzer: The only way this program may run for a longer period is to use the continuous run button. In LabVIEW that is a BIG NO-NO.
And also, NN70, you use a frame as a tool to "clean up" a diagram so it does not get so big. In fact the frame is not needed, and by doing some cleaning up, the diagram could easily be fitted on one screen. We also have the concept of using subVIs. So all in all, NN70, you are in the process of developing some very, very bad programming habits that will cause you severe problems if you do not do anything about them. I am sure Wetzer can point you to some good free LabVIEW lessons.
If you do not know it, NN70: LabVIEW is shipped with a lot of good examples. Go to Help in the toolbar, select "Find Examples", and then search for DAQmx. This will give you a long list of examples. Locate "Cont Acq&Chart Samples-Int Clk.vi" and take a look at it. This program is nice and compact, and it does much the same as your program does. Note how it is done: the setup and closing are done only ONCE, and the part that is repeated is placed in a loop. This is the correct structure of any program made in LabVIEW. The continuous run button is a tool ONLY for debugging.
I suggest you use the structure in the example I pointed you to. Then you can also clean up your program and get rid of the frames. Post it here for comments if you feel like it. If not, your problem is gone (which I am quite sure will be the case), and we will at least have something sane we can work on.
Besides which, my opinion is that Express VIs Carthage must be destroyed deleted
(Sorry, no LabVIEW "brag list" so far)