ALG_RSA_SHA_PKCS1 with Cyberflex 64k

Hello,
I've successfully tested ALG_RSA_PKCS1 on Gemalto Cyberflex 32K and even 64K cards.
Here is my encryption function:
private void encryptRSA(APDU apdu) {
          byte[] a = apdu.getBuffer();
          short byteRead = apdu.setIncomingAndReceive();
          // Raw RSA operation with the private CRT key
          cipherRSA.init(rsa_PrivateCrtKey, Cipher.MODE_ENCRYPT);
          short cipherLen = cipherRSA.doFinal(a, (short) dataOffset, byteRead, a, (short) dataOffset);
          // Send results
          apdu.setOutgoing();
          apdu.setOutgoingLength(cipherLen);
          apdu.sendBytesLong(a, (short) dataOffset, cipherLen);
     }
But when I combine it with SHA-1, I get an error from this quite similar code:
private void signDocs(APDU apdu) {
          byte[] apduBuffer = apdu.getBuffer();
          short byteRead = apdu.setIncomingAndReceive();
          // create signature
          sig.init(rsa_PrivateCrtKey, Signature.MODE_SIGN);
          byte[] sigResult = null; // never allocated: sign() below receives a null output array
          short size = sig.sign(apduBuffer, (short) dataOffset, byteRead, sigResult, (short) dataOffset);
          apdu.setOutgoing();
          apdu.setOutgoingLength(size);
          apdu.sendBytesLong(sigResult, (short) dataOffset, size);
     }
I followed everything as in the 2.2.1 API documentation.
The two corresponding declarations at the beginning are:
public Signature sig = null;
sig = Signature.getInstance(Signature.ALG_RSA_SHA_PKCS1, false);
The cardlet compiles and loads fine, but when I try to sign I get error 6F00 (a corrected sketch follows after this post).
I'm surprised, because RSA without SHA-1 works very well; I even manage to export the public key and use it on the PC.
And according to the spec of my 64K card, it supports SHA-1:
[http://scardshop.com/boutique/fiche_produit.cfm?ref=CPCB64P&type=2&code_lg=lg_fr&num=11 ]
kind regards,
Marc
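A likely cause of the 6F00 above is that sig.sign() is handed a null output array, so the call throws before anything can be returned. A minimal corrected sketch: it signs back into the APDU buffer, which the follow-up post below shows is large enough on this card for the 128-byte RSA-1024 signature (writing the signature at offset 0 rather than dataOffset is an assumption made for simplicity):

private void signDocs(APDU apdu) {
          byte[] apduBuffer = apdu.getBuffer();
          short byteRead = apdu.setIncomingAndReceive();
          sig.init(rsa_PrivateCrtKey, Signature.MODE_SIGN);
          // Sign the incoming data and write the signature over the APDU buffer
          short size = sig.sign(apduBuffer, (short) dataOffset, byteRead, apduBuffer, (short) 0);
          apdu.setOutgoing();
          apdu.setOutgoingLength(size);
          apdu.sendBytesLong(apduBuffer, (short) 0, size);
     }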

Hello,
Some news:
when I use the public key in the card like this, it works:
private void signDocs(APDU apdu) {
          byte[] apduBuffer = apdu.getBuffer();
          short byteRead = apdu.setIncomingAndReceive();
          // create signature -- note: initialized with the PUBLIC key
          sig.init(rsa_PublicKey, Signature.MODE_SIGN);
          short size = sig.sign(apduBuffer, (short) 0, byteRead, apduBuffer, (short) 0);
          apdu.setOutgoing();
          apdu.setOutgoingLength(size);
          apdu.sendBytesLong(apduBuffer, (short) 0, size);
     }
When I say it works, I mean it returns a 128-byte array; I haven't verified the signature yet.
But that means I would have to export the private key if I want to verify it on the PC,
and I would never export my public key (which would then have to be treated like a private key...).
Will that work? (A host-side verification sketch follows this post.)
I've found another solution:
http://forums.sun.com/thread.jspa?forumID=23&threadID=5240359
but I would prefer my own.
Please help; I'm very short on time.
(PS: sorry for my bad English.)
kind regards,
Marc
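For reference, the conventional arrangement is the reverse of the code above: the private key stays on the card and signs, while the exported public key verifies on the PC. A minimal host-side sketch using the standard java.security API (the DER-encoded key format and the helper's shape are assumptions about how the key is exported, not taken from the thread):

import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;

public class VerifyCardSignature {
    // data: the bytes sent to the card; sig: the 128-byte response;
    // pubKeyDer: the exported public key, X.509/DER-encoded (assumption).
    static boolean verify(byte[] data, byte[] sig, byte[] pubKeyDer) throws Exception {
        PublicKey pub = KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(pubKeyDer));
        Signature v = Signature.getInstance("SHA1withRSA"); // matches ALG_RSA_SHA_PKCS1
        v.initVerify(pub);
        v.update(data);
        return v.verify(sig);
    }
}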

Similar Messages

  • Intermedia Image transform problem with Tifs 64k

Hello, I have loaded a bunch of TIFs ranging in size from about 8K up to 120 or so kilobytes. When I try to do a scale transform on them and change them into GIFs, using the image.process('scale="0.2"') command for example, only the TIF images that are about 64K and under get processed; for the rest I get an unhandled internal exception. I am running Oracle 8.1.5 Enterprise Edition on a Sun Enterprise 250 with .5 GB RAM. Should I reconfigure how much RAM is available to interMedia, and if so, how do you do that?

I got in touch with John directly, and his problem has been solved.
He sent two TIFF images - a bad one and a good one. An image expert took a look at them. Here is his comment:
"I took a look at these images and they seem to be "good" - i.e. they are not corrupted or in an unsupported format. I tried our new version and it handles them OK, if a little slowly. I don't actually have an 8.1.x database up right now and it will take some time. But I think I see what the problem might be. I made a change to the image: I used a utility to make the image "striped". The images were contained in one large "chunk" of pixels. TIFF images can also be striped, where the pixels are in much smaller chunks, which means less memory is required to work with them. In the case of the 8.1.5 image code, there might be some spot where it can't handle this kind of TIFF image larger than 64K. By causing the images to be written with small chunks of data you can avoid this. The TIFF manual suggests that the chunks be about 8K; for this image that translates into about 500 scanlines per chunk."
John tested this "striped" TIFF image and successfully uploaded it into the database. (A sketch of re-chunking a TIFF in Java follows below.)
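For anyone needing to re-chunk such TIFFs themselves: the thread used an unnamed external utility, but the JDK's built-in TIFF plugin (Java 9 and later - an assumption, since the thread predates it) can rewrite an image in small tiles, which achieves the same small-chunk layout as striping:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.*;
import javax.imageio.stream.ImageOutputStream;

public class RechunkTiff {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File(args[0]));
        ImageWriter writer = ImageIO.getImageWritersByFormatName("TIFF").next();
        ImageWriteParam p = writer.getDefaultWriteParam();
        p.setTilingMode(ImageWriteParam.MODE_EXPLICIT);
        p.setTiling(256, 256, 0, 0); // small tiles instead of one big pixel chunk
        try (ImageOutputStream out = ImageIO.createImageOutputStream(new File(args[1]))) {
            writer.setOutput(out);
            writer.write(null, new IIOImage(img, null, null), p);
        }
        writer.dispose();
    }
}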

  • Question on protecting Windows applications with CyberFlex

The problem I'm trying to solve is the following:
I have an application that contains very important algorithms. I want to sell this application, but nobody must learn the algorithms it uses. I decided to move those algorithms inside a CyberFlex smart card. Doing this is easy; in fact, I already did it (I wrote some applets and put them inside; I can call them, get data, etc.).
But here is the next problem: these algorithms are updated from time to time, and I need to update them in each card (call it a protection dongle) I have sold. Nobody should get their source, because these algorithms are my secret.
So now I'm trying to find a way of updating applets inside a CyberFlex card securely (I can't require users to send me their cards for updates...).
Secure channels looked suitable to me for the following reason:
when I program a card, I already know all its keys, and I can put the keys into the applet inside the card. When a user needs an upgrade, he runs my software. The software calls a special applet inside the card, which adds random data to a key, encrypts it, and returns it to the application in encrypted form.
The user sends me this encrypted key. I decrypt it, remove the random data, use it to encrypt the secure-channel command bytes (to be used on that card) that load the applet, and send them back to the user. He runs my software, and it executes the secure-channel code he got from me.
Does this idea work at all?
My problem is that I still can't imagine the secure channel working as anything other than just encrypting the communication between my software and the smart card itself. As I understand it, no random session key is used; only the AUTH key is needed for encryption.
Thank you for your help in advance.

First, you don't update an applet; it must be removed and downloaded again.
Second, you are re-inventing the wheel. What you are describing IS the purpose of the GlobalPlatform secure channel: you protect the card's domain with a mutual authentication, and from that authentication a secure channel is opened so you can manage the domain.
What you will run into is a bunch of issues - key management, a decentralized application, getting keys to the card securely, card personalization... BUT I'm not saying it can't be done!
I hate to be the bearer of bad news, but there are quite a few card management systems out there that do this. See ActivCard, Intercede, BellID, DeXaBadge, Alacris, Datakey, etc.

  • Replication with in memory DB: client synchronization

    Hi,
I'm using the replication framework with two completely in-memory databases. The first one is launched as master without knowledge of its replica db ("dbenv->repmgr_add_remote_site" and "dbenv->repmgr_set_nsites" are not called), some data is inserted into it, and subsequently the replica process is launched as client (in this case "repmgr_add_remote_site" and "repmgr_set_nsites" are called with the master's coordinates). I expected the client to be synchronized by the master with the previously inserted records, but this doesn't seem to happen. Furthermore, although the client opens the db successfully, when db->get is called on the client the following error is returned:
"DB->get: method not permitted before handle's open method".
    These are the first messages printed by master when client process is started:
    MASTER: accepted a new connection
    MASTER: got handshake 10.100.20.106:5066, pri 1
    MASTER: handshake introduces unknown site
    MASTER: EID 0 is assigned for site 10.100.20.106:5066
MASTER: ./env rep_process_message: msgv = 4 logv 13 gen = 0 eid 0, type newclient, LSN [0][0] nogroup
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid -1, type newsite, LSN [0][0] nobuf
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid -1, type newmaster, LSN [1][134829] nobuf
MASTER: NEWSITE info from site 10.100.20.106:5066 was already known
MASTER: ./env rep_process_message: msgv = 4 logv 13 gen = 0 eid 0, type master_req, LSN [0][0] nogroup
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid -1, type newmaster, LSN [1][134829] nobuf
MASTER: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type update_req, LSN [0][0]
MASTER: Walk_dir: Getting info for dir: ./env
MASTER: Walk_dir: Dir ./env has 2 files
MASTER: Walk_dir: File 0 name: __db.rep.gen
MASTER: Walk_dir: File 1 name: __db.rep.egen
MASTER: Walk_dir: Getting info for in-memory named files
MASTER: Walk_dir: Dir INMEM has 1 files
MASTER: Walk_dir: File 0 name: RgeoDB
MASTER: Walk_dir: File 0 (of 1) RgeoDB at 0x41ee2018: pgsize 65536, max_pgno 1
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type update, LSN [1][134829] nobuf
MASTER: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type page_req, LSN [0][0]
MASTER: page_req: file 0 page 0 to 1
MASTER: page_req: found 0 in dbreg
MASTER: sendpages: file 0 page 0 to 1
MASTER: sendpages: 0, page lsn [1][218]
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type page, LSN [1][134829] nobuf resend
MASTER: wrote only 13032 bytes to site 10.100.20.106:5066
MASTER: sendpages: 0, lsn [1][134829]
MASTER: sendpages: 1, page lsn [1][134585]
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type page, LSN [1][134829] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: sendpages: 1, lsn [1][134829]
MASTER: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log_req, LSN [1][28]
MASTER: [1][28]: LOG_REQ max lsn: [1][134829]
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][28] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][131549] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][131633] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][131797] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][131877] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][131961] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][132125] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][132205] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: queue limit exceeded
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][132289] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: queue limit exceeded
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][132453] nobuf resend
MASTER: msg to site 10.100.20.106:5066 to be queued
MASTER: queue limit exceeded
MASTER: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][132533] nobuf resend
And these are the corresponding messages printed by the client process after startup:
REP_UNDEF: rep_start: Found old version log 13
CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 0 eid -1, type newclient, LSN [0][0] nogroup nobuf
Slave becomes slave
Replication service started
CLIENT: starting election thread
CLIENT: elect thread to do: 0
CLIENT: repmgr elect: opcode 0, finished 0, master -2
CLIENT: init connection to site 10.100.20.105:5066 with result 115
CLIENT: got handshake 10.100.20.105:5066, pri 1
CLIENT: handshake from connection to 10.100.20.105:5066
CLIENT: handshake with no known master to wake election thread
CLIENT: reusing existing elect thread
CLIENT: repmgr elect: opcode 3, finished 0, master -2
CLIENT: elect thread to do: 3
CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 0 eid -1, type newclient, LSN [0][0] nogroup nobuf
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type newsite, LSN [0][0]
CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 0 eid -1, type master_req, LSN [0][0] nogroup nobuf
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type newmaster, LSN [1][134829]
CLIENT: repmgr elect: opcode 0, finished 0, master -2
CLIENT: Election done; egen 6
CLIENT: Updating gen from 0 to 5 from master 0
CLIENT: Egen: 6. RepVersion 4
CLIENT: No commit or ckp found. Truncate log.
CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type update_req, LSN [0][0] nobuf
New Master elected
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type newmaster, LSN [1][134829]
CLIENT: Election done; egen 6
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type update, LSN [1][134829]
CLIENT: Update setup for 1 files.
CLIENT: Update setup: First LSN [1][28].
CLIENT: Update setup: Last LSN [1][134829]
CLIENT: Walk_dir: Getting info for dir: ./env
CLIENT: Walk_dir: Dir ./env has 5 files
CLIENT: Walk_dir: File 0 name: __db.rep.gen
CLIENT: Walk_dir: File 1 name: __db.rep.egen
CLIENT: Walk_dir: File 2 name: __db.rep.init
CLIENT: Walk_dir: File 3 name: __db.rep.db
CLIENT: Walk_dir: File 4 name: __db.reppg.db
CLIENT: Walk_dir: Getting info for in-memory named files
CLIENT: Walk_dir: Dir INMEM has 0 files
CLIENT: Next file 0: pgsize 65536, maxpg 1
CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type page_req, LSN [0][0] any nobuf
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type page, LSN [1][134829] resend
CLIENT: PAGE: Received page 0 from file 0
CLIENT: PAGE: Write page 0 into mpool
CLIENT: PAGE_GAP: pgno 0, max_pg 1 ready 0, waiting 0 max_wait 0
CLIENT: FILEDONE: have 1 pages. Need 2.
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type page, LSN [1][134829] resend
CLIENT: PAGE: Received page 1 from file 0
CLIENT: PAGE: Write page 1 into mpool
CLIENT: PAGE_GAP: pgno 1, max_pg 1 ready 1, waiting 0 max_wait 0
CLIENT: FILEDONE: have 2 pages. Need 2.
CLIENT: NEXTFILE: have 1 files. RECOVER_LOG now
CLIENT: NEXTFILE: LOG_REQ from LSN [1][28] to [1][134829]
CLIENT: ./env rep_send_message: msgv = 4 logv 13 gen = 5 eid 0, type log_req, LSN [1][28] any nobuf
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][28] resend
CLIENT: rep_apply: Set apply_th 1
CLIENT: rep_apply: Decrement apply_th 0
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][64] resend
CLIENT: rep_apply: Set apply_th 1
CLIENT: rep_apply: Decrement apply_th 0
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][147] resend
CLIENT: rep_apply: Set apply_th 1
CLIENT: rep_apply: Decrement apply_th 0
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][218] resend
CLIENT: rep_apply: Set apply_th 1
CLIENT: rep_apply: Decrement apply_th 0
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][65802] resend
CLIENT: rep_apply: Set apply_th 1
CLIENT: rep_apply: Decrement apply_th 0
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][131386] resend
CLIENT: rep_apply: Set apply_th 1
CLIENT: rep_apply: Decrement apply_th 0
CLIENT: ./env rep_process_message: msgv = 4 logv 13 gen = 5 eid 0, type log, LSN [1][131469] resend
It seems like there are repeated messages from the master, but I can't understand what's wrong.
Thanks for any kind of help.
    Marco

    The client requests copies of the database pages from the master by sending the PAGE_REQ message. The master responds by sending a message for each page (i.e., many PAGE messages). The master tries to send PAGE messages as fast as it can, subject only to the throttling configured by rep_set_limit (default 10Meg).
    With 64K page size, the master's local TCP buffer fills up immediately, and repmgr only stores a backlog of 10 additional messages before starting to drop messages. The replication protocol is designed to tolerate missing messages: if you were to let this run, and continue to commit new update transactions at the master at a modest rate, I would expect this to complete eventually.
    However, repmgr could clearly do better at managing the traffic to avoid this situation, at least in cases where the client is accepting input at a reasonable rate. I am currently working on a fix/enhancement to repmgr which should accomplish this. (This same problem was reported by another user a few days ago.)
In the meantime, you may be able to work around this problem by setting a low throttling limit. With your 64K page size, I would try something in the 320,000 to 640,000 byte range (see the sketch below).
    Alan Bram
    Oracle
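A sketch of Alan's workaround in Java. In the C API this is DB_ENV->rep_set_limit(); the com.sleepycat.db Java bindings are assumed here to expose it as EnvironmentConfig.setReplicationLimit (an assumption to verify against the javadoc of your BDB release):

import com.sleepycat.db.EnvironmentConfig;

public class ThrottleExample {
    static EnvironmentConfig masterConfig() {
        EnvironmentConfig config = new EnvironmentConfig();
        config.setAllowCreate(true);
        config.setInitializeReplication(true);
        // Cap the data the master sends in a single burst to ~500 KB,
        // instead of the 10 MB default, so 64K pages don't overrun
        // repmgr's small outbound message queue.
        config.setReplicationLimit(500 * 1024);
        return config;
    }
}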

  • Repository with software worked on v4 firmware

Here is my repo with software that works on the official v4 firmware. Connect via SSH and run this command:
echo "deb http://anionix.ddns.net wheezy-64k main" > /etc/apt/sources.list
Then you can run apt-get update and install software. A "SID" repo is also available. If you need chroot: download the chroot installer.
WARNING: This software is only for the official v4 firmware (kernel/software with PageSize=64k)! I don't change any source code, but if you don't trust me - just don't use it.
Current software list (browse the full list of packages): Transmission-daemon (+2.84 from SID), MC (Midnight Commander), MiniDLNA (+1.1.2 from SID), Openssh-server + client, Samba v3.6.6, rSync (+ SID), pyLoad (download manager), Aria2, Python, Perl, Corosync, Apache2, PHP5 (Curl, GD, MCrypt), MySQL 5.5 server & client, ffmpeg, Pacemaker, HTop, Locales, Build-essential and patched binutils, base system and all base tools (for making a chroot or building a system from scratch), and more (see the full list at the end of this post).
If you want other software, just ask me here. But don't ask me to add software not included in the official Debian repository!

Hi again. I was trying your repository, but I cannot install transmission-daemon. This is what I'm doing:
WDMyCloud:~# cp /etc/apt/sources.list /etc/apt/sources.list.bak
    WDMyCloud:~# echo deb http://anionix.ddns.net wheezy-64k main > /etc/apt/sources.list
    WDMyCloud:~# apt-get update
    Ign http://anionix.ddns.net wheezy-64k Release.gpg
    Ign http://anionix.ddns.net wheezy-64k Release
    Get:1 http://anionix.ddns.net wheezy-64k/main armhf Packages [10.4 kB]
    Ign http://anionix.ddns.net wheezy-64k/main Translation-en
    Fetched 10.4 kB in 3s (2607 B/s)
    Reading package lists... Done
    WDMyCloud:~# apt-get install transmission-daemon
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:
    The following packages have unmet dependencies:
    transmission-daemon : Depends: libcurl3-gnutls (>= 7.16.2) but it is not installable
    Depends: libnatpmp1 but it is not installable
    Recommends: transmission-cli (>= 1.50-1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
Can you help me? What am I doing wrong? Thanks in advance.

How to generate an openplatform.exp within the JCOP Tools?

    Hi Guys,
I am trying to compile and load the CoolKey applet (http://directory.fedoraproject.org/wiki/BuildCoolKeyApplet) onto a Cyberflex 64k card.
The CAP files need to be transformed with a byte code verifier before the card can load them.
The byte code verifier needs the .exp files of all included libraries, so I need the openplatform.exp file.
I tried the one from http://www.globalplatform.org/specifications/archived/card-tech-201.zip, but I end up with an error on install_for_install.
I think the JCOP Tools may ship a slightly different version, and I would like to get an openplatform.exp out of the same build environment that is used for the applet.
Can anybody tell me how I can generate such an .exp file from the GlobalPlatform material included in the JCOP Tools?
    Regards,
    Fabian

    Hi,
I am using the one from http://www.trusted-logic.com/down.php
If I call it without the export-file arguments, it looks like this:
java -jar captransf.jar -s -noint coolkey.cap
Cannot find export file for imported package a0:0:0:0:62:0:1
Please provide the correct export file on the command line
To get the MUSCLE applet working (which does not use openplatform), I've used this line with success:
java -jar captransf.jar -s -noint "api21\javacard\framework\javacard\framework.exp" "api21\javacard\security\javacard\security.exp" "api21\java\lang\javacard\lang.exp" "api21\javacardx\crypto\javacard\crypto.exp" musclecard.exp musclecard.cap

  • Java Card RSA public key problem

    Hello,
I have a problem when I try to set the modulus part of an RSA public key: there is always an error 6F 00.
The code is very simple:
private void saisie_modulus(APDU apdu) {
          byte apduBuffer[] = apdu.getBuffer();
          if (!pin.isValidated())
               ISOException.throwIt(ISO7816.SW_SECURITY_STATUS_NOT_SATISFIED);
          byte byteRead = (byte) (apdu.setIncomingAndReceive());
          rsa_PublicKey2.setModulus(apduBuffer, (short) ISO7816.OFFSET_CDATA, (short) byteRead);
          return;
     }
What is really strange is that when I replace
rsa_PublicKey2.setModulus(apduBuffer, (short)ISO7816.OFFSET_CDATA, (short)byteRead);
with rsa_PublicKey.setModulus(rsaPublicKey, (short)6, (short)128); it works. (rsaPublicKey is an array containing the key; it's taken from the CryptoTest sample that comes with the Axalto software.)
I tried to send in the APDU the exact same modulus value as the sample CryptoTest's public key, and it doesn't work.
I tried other example keys that I had tested elsewhere, and they don't work either.
The other key-setter components work (the same function, but with
rsa_PublicKey2.setExponent(apduBuffer, (short)ISO7816.OFFSET_CDATA, (short)byteRead);
rsa_PublicKey2.setP(apduBuffer, (short)ISO7816.OFFSET_CDATA, (short)byteRead);
rsa_PublicKey2.setQ(apduBuffer, (short)ISO7816.OFFSET_CDATA, (short)byteRead);
rsa_PublicKey2.setPQ(apduBuffer, (short)ISO7816.OFFSET_CDATA, (short)byteRead);
rsa_PublicKey2.setDP1(apduBuffer, (short)ISO7816.OFFSET_CDATA, (short)byteRead);
rsa_PublicKey2.setDQ1(apduBuffer, (short)ISO7816.OFFSET_CDATA, (short)byteRead);
in the place of
rsa_PublicKey2.setModulus(apduBuffer, (short)ISO7816.OFFSET_CDATA, (short)byteRead); )
Do you have any ideas? Is it possible there is a bug in the card? I use a Cyberflex 64k V2.
Thank you in advance

Yes, I did enter the key by hand (as I did with the exponent, P, Q, PQ, DP1 and DQ1, and it worked).
And I did enter by hand the EXACT same key (128 bytes) as in the rsaPublicKey array, to test.
I repeat: when I type
rsa_PublicKey.setModulus(rsaPublicKey, (short)6, (short)128);
it works,
but when I enter the EXACT same key manually (128 bytes starting from byte 6), it doesn't work.
I even generated a key in the card and exported its modulus (128 bytes). I entered by hand the exact same modulus (copy-paste, to be precise) that was exported, and the card did not accept it.
This problem only happens with the modulus, I repeat. (One hypothesis is sketched after this post.)
Any ideas?
I begin to suspect a technical problem. I don't
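A plausible culprit, not confirmed in the thread: the handler stores the received length in a byte. A 128-byte modulus overflows byte to -128, so setModulus() receives a negative length and throws (surfacing as 6F 00), while the exponent, P, Q, PQ, DP1 and DQ1 (64 bytes or less) stay positive - which would match the reported symptoms exactly. A sketch of the fix:

          short byteRead = apdu.setIncomingAndReceive(); // keep the length in a short; (byte) 128 overflows to -128
          rsa_PublicKey2.setModulus(apduBuffer, (short) ISO7816.OFFSET_CDATA, byteRead);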

  • Database performance degradation issue

    Hi,
We are having a database performance problem.
Oracle database 8.1.7.0.
When we use the statement
SQL> select name,value from v$sysstat where name ='redo buffer allocation retries';
NAME VALUE
redo buffer allocation retries 2540
the redo retries value shown above is too big; it should not be.
Currently we have log_buffer = 65536 bytes (64 KB).
Is it necessary to increase the size of log_buffer? Will increasing it improve database performance to some extent?
    Also, regarding database buffer cache,
    SQL> SELECT NAME, VALUE FROM V$SYSSTAT WHERE NAME IN ('db block gets', 'consistent gets', 'physical reads');
    NAME VALUE
    db block gets 4365099
    consistent gets 1309280457
    physical reads 103708616
    From the above values, buffer cache hit ratio is 0.921052817
So, is it necessary to increase the size of the database buffer cache?
    With Regards

A 64K log_buffer is likely too small; the default is 512K per CPU.
Increasing the log buffer will decrease the number of redo allocation retries.
You should set it to 512K or 1M.
The buffer cache hit ratio is a meaningless indicator of system performance, as Connor McDonald has demonstrated on http://www.oracledba.co.uk
You'd do better to strive to reduce I/O.
Also, you will notice you need very large amounts of memory to get very little improvement.
Personally I would probably do something if the BCHR were below 80 percent, but I know of situations where the problem is in the application and no value of db_block_buffers will be big enough.
    Hth
    Sybrand Bakker
    Senior Oracle DBA

  • Bandwidth throughput

    I have question about bandwidth throughput. Here's what I have:
    Private link between Las Vegas, NV and Waterloo Canada
    100Mb
    69ms latency
The maximum theoretical throughput would be about 7 Mbps for a single TCP stream with a 64K window (see the back-of-envelope calculation below).
If I copy a file between Windows 7 computers, should it achieve 7 Mbps over the link in perfect conditions? If it is less than that, and no other network problems are present, should I contact the provider to have them look at the link?
    For Ex.
A 254 MB file took 9m45s to copy:
254 MB / 9.75 min ≈ 26 MB per minute
26 MB × 8 = 208 Mb ≈ 208,000,000 bits per minute
208,000,000 / 60 ≈ 3.47 Mbps (about half of the theoretical max)
    If I am the only one using that link, should I be getting 7Mbps every single time?
    I also did some iperf tests with a 64K window. It almost matches the theoretical limit.
    iperf results:
    [156]  0.0-600.1 sec   591 MBytes  8.26 Mbits/sec
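The ~7 Mbps ceiling comes from the TCP window/RTT bound: a single stream can have at most one window of unacknowledged data in flight per round trip. A quick check of that arithmetic (plain Java; no values assumed beyond those in the post):

public class ThroughputBound {
    public static void main(String[] args) {
        double windowBits = 65536 * 8;  // 64 KB TCP window, in bits
        double rttSeconds = 0.069;      // 69 ms round-trip time
        // At most one window per round trip for a single TCP stream:
        double mbps = windowBits / rttSeconds / 1e6;
        System.out.printf("max ~%.2f Mbit/s%n", mbps); // prints ~7.60
    }
}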

    Hi,
What's bad about contacting the provider and having them look at the link?
Regards
Thanveer
"Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid."

  • Documentation user guide manual

    Does LR 1.0 have a full user guide/manual, either in pdf or html that I can download, read, and print as desired?
    One of the fellows said it only came with a guide?
    I did see the hi res Getting Started Guide, but am looking for the full help.
    Michael

Thanks for -all- the responses. That help.pdf is what I am looking for. I will be trying the trial and reading the entire manual before I buy, cuz I need to know what they did include, how they changed it, and what they did not include. I was in the beta for about 5 mos, but need to see 1.0 fer real.
    I have already done all the video tutorials, and they did look really good.
    The trial will help me see what kinda speed I can get with my 64K* ram, 110GB HD, .25 GHZ machine*
    The beta 4.1 was pretty slow in some areas, for me.
    * realfeel

  • What may cause Errors.log to be generated from Essentials configuration wizard?

I customized install.wim to handle specific drivers, and while the OS installs properly, during the Essentials Configuration Wizard the process stops almost immediately at 0% and generates the file "Errors.log" with the contents:
FATAL: SetFolderPermission:
Is it just a corrupted install.wim that happens to prevent the wizard from working, or did I accidentally change permissions on something that would lead to this type of error?

Well, you finally lost me :) Any data partitions are pretty much irrelevant until after Essentials is done installing.
Essentials does not get "installed": Windows gets installed, and then Essentials "configures". As I said, part of this configuration is deciding where the largest data partition is. The team does not do tons of what-ifs in the dev cycle. I have no clue what that code/logic actually is, but I am certain it went no deeper than a normal standard Server install. They did not test different allocation units, or smaller-than-standard hidden partitions with drive letters.
That said, I was able to complete the Essentials configuration using my unattend.xml with 100, 350, and 328. I used diskpart to assign letters to the 100 and 350, and formatted a 70 GB "data" partition with 64k and gave it B:. But I am confused, because after the Essentials config finished, the 100 and 350 did not have drive letters. Also, when I assigned the letters it was not reflected in diskmgmt.msc (right-click still only showed Help), but File Explorer did show them.
So my ask would be that you edit your winpe to 100, 350, and 328, format the system drive with 4k, and see if that works for you. Then we/you could change things one at a time to see what breaks it.
All this said, even if we figure out what breaks it and I file a bug, they may acknowledge it is a bug but mark it as will-not-fix in this build and the next. If we can explain to the dev team why it should work this way, that this is the future of Server installs, they may look at changing it in V.next.next, to be shipped who knows when. 3 years from now?
Grey

  • SCP02 Put key problem

    Hi,
I have the following trouble: the PUT KEY command fails with code 6982.
Secure channel mode: 3.
I don't have any problem creating the secure channel; it is created correctly. I think the problem is with encrypting the command, or maybe with the data.
In SCP01 I do no operation on the last MAC, but as I see in the GlobalPlatform library source, when we use SCP02 we must encrypt the last MAC with the DEK session key - is that correct? I then generate a new MAC for the PUT KEY command, using the "new" last-MAC value as ICV. Next I encrypt the command with the AuthEnc session key and append the MAC to the resulting command.
In PUT KEY I set the algorithm to 0x81, the length to 0x10, and the check-value length to 0x3. This procedure works fine with a CyberFlex card.
PS: JCOP support doesn't send me any response. Nice support...
PPS: Maybe somebody can explain to me how to work with SCP02, or has an SCP02 implementation example?
-Regards.

ZuZu wrote: "I have next troubles, command put key fail, with 6982 code."
The key encryption is not correct.
"Secure channel mode - 3."
3?
"...when we use SCP02, we must encrypt Last Mac with DEK Session key, it's correct?"
Normally a card works with only one SCP, so make sure your card really supports SCP02. With SCP02 you encrypt the key values in the PUT KEY command with the DEK session key; in SCP01 you use the static DEK key. Furthermore, the session key generation is different. To get an idea, you can check out the open source project GPShell.
"In Put key I set algo as 0x81, len - 0x10, and CheckValue len - 0x3. This procedure work fine with CyberFlex card."
Gemalto cards have their own mechanism for SCP; if you search this forum you will find enough hints. JCOP does it strictly according to the GP spec.
"PS. JCOP Support don't send me any responce. Nice support ..."
JCOP support is now restricted to "promising" customers - in other words, customers who order large volumes of NXP chips.

  • RE: Drive doesn't boot after 10.5.8 upgrade

    After upgrading to 10.5.8, it was a struggle to shut down my machine, especially after typing in Firefox caused it to lock up. When I decided it would be best to restart the machine and start fresh, the screen stayed gray, without the apple on bootup. After a few reboots, I had the computer boot to one of my FireWire drives. Initially, the internal ATA drive didn't show up at all in either DiskUtility or DiskWarrior. After a few hours, the drive began to show up in DiskUtility, with the brand name of the drive listed, but no down arrow with the name I gave the drive. It also doesn't allow me to select any of the buttons for verify/repair disk permissions or verify/repair disk, as they are greyed out. There is no change with DiskWarrior, as it doesn't show me the drive in the pull down menu. And in TechTool Pro, it shows the drive off to the side as disk4s, but doesn't allow me to mount the drive.
    Any solutions to get my drive back up and running, and to archive files/programs onto a backup drive?

    Update:
After realizing that no software was going to get me up and running with the drive, and after testing the Seagate drive on another G4 and getting similar, if not worse, results (I couldn't even get it listed in the DiskUtility window as I had previously, though I couldn't verify/repair the disk on my computer either), I took the view that by partitioning the drive - if it was still a sound drive - I could then take it to a data recovery place as a last resort.
I had planned on taking the drive for data recovery to a place near my home, but of course they closed up shop last month after more than a decade in the community. Being antsy and wanting some resolution, I read that partitioning a drive would not zero out my data, which made it sound like it should be an easy recovery, and at least a start to the process.
Then again, I was wondering if there is some software I could get to do that myself, or some method, so I don't have to fork over $$$$ if it's an easy thing to do on my own. After reading up on partitioning, it indicated that I wasn't deleting anything, and that it was in essence re-jigging my drive. So I took my 75GB drive and split it 15GB/60GB. In a blink I had two drives that I could access once again. I ran the verify/repair options, with no errors coming up. From there I began to reinstall OS X 10.5 on one partition, but stopped immediately, knowing that having the OS wasn't necessary at this point, as I still have an external FireWire drive to boot from.
I then rebooted the machine to that FireWire drive, and to my sadness, while the two partitions appeared on the desktop, one was a completely empty partition (60GB) and one had just 64K on it (15GB).
I knew it wouldn't be that easy.
Having already scoured Disk Utility and DiskWarrior and found nothing to recover the data, and having used the Data Recovery "tool" in TechTool Pro 5 (it only brought up about 30 system files at most, based on a single search for one letter of the alphabet), I'm all ears at this point as to what I can possibly do on my end.
At least I know the drive is in working order, despite the few people I talked to who tried to tell me that the drive had suddenly breathed its last breath and would never work again. The 10.5.8 update has something to do with why the drive fried; there doesn't seem to be anything physically wrong with it.
    Thanks in advance.

  • Change bytes per sector windows server 2012

    Hi,
We have installed Windows Server 2012 Datacenter Edition on a Cisco UCS C24 M3 server. We need to install it with 4K bytes per sector and 4K bytes per physical sector. When we specify the allocation unit size while formatting, it only changes the bytes per cluster.
We have tried 8K and 64K, but the OS still shows only 512 bytes per sector. I tried to find hotfixes for this, but I didn't find any for Server 2012.
Please help me to resolve this.
Balkrushna

    Have you looked at this:
    http://msdn.microsoft.com/en-us/library/windows/desktop/hh848035(v=vs.85).aspx

  • Essbase and Stripe Sizing

Has anyone ever come across information describing the best-performing stripe size to implement when Essbase is writing to a RAID-configured device? I'm not looking for comparisons between RAID 0+1 versus RAID 5, etc.; I'm interested in the stripe size (sector size).
    Thanks

I would suggest you go with a 64K or 128K stripe size. The relationship to Essbase performance likely corresponds to your application's block size: the larger the block size, the more a larger stripe size would help.
    To do this question justice some nice benchmarking would need to be performed.
    Anyone else have thoughts?
    Regards,
    -John
