Algorithms such as ALG_RSA_ISO9796

How can I know what EXACTLY this algorithm does?
The documentation says "Cipher algorithm ALG_RSA_ISO9796 provides a cipher using RSA. Input data is padded according to the ISO 9796 (EMV'96) scheme.".
I know EMV and RSA pretty well; what I want to know is what EXACTLY is done. In that case, I would guess that it computes a SHA hash on the message, splits it, adds a header and a trailer, and then signs (like in EMV).
Where can I find:
1) the exact definition of the ciphers (this one and others)
2) test vectors
Thanks in advance.

EMV'96 not only defines the use of the RSA algorithm for digital signatures; it can also be used to encipher general data.
See Annex E (for digital signatures) and Annex F (for enciphering).
From my point of view, ALG_RSA_ISO9796 doesn't pad the data (except for adding 0's) for odd exponents, but it has a different enciphering behaviour for even exponents.
That is what this constant defines.
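The Java Card documentation gives the padding only by reference; the precise definition and test vectors live in ISO/IEC 9796 and the EMV '96 specification, not in the API spec. As a hedged illustration of the answer's claim (for odd exponents, essentially zero-padding followed by plain RSA), raw RSA is just modular exponentiation. The key below is the classic textbook toy example (p=61, q=53), not an EMV key:

```java
import java.math.BigInteger;

public class RawRsaSketch {
    public static void main(String[] args) {
        // toy RSA key: p = 61, q = 53 -> n = 3233, e = 17, d = 2753
        BigInteger n = BigInteger.valueOf(3233);
        BigInteger e = BigInteger.valueOf(17);
        BigInteger d = BigInteger.valueOf(2753);

        // a message block, conceptually left-padded with zero bytes
        // up to the modulus length
        BigInteger m = BigInteger.valueOf(65);
        BigInteger c = m.modPow(e, n);      // "encipher": m^e mod n
        BigInteger back = c.modPow(d, n);   // "decipher": c^d mod n
        System.out.println(c + " " + back); // back recovers 65
    }
}
```

Even-exponent schemes in ISO 9796 (Rabin-style) behave differently from this, which may be why the constant's behaviour is described separately for the two cases.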

Similar Messages

  • Generate SSL cert with stronger signature algorithm such as RSA-SHA 1 or SHA 2 from Certificate Authority Version: 5.2.3790.3959

    We have a Certificate Authority (Version: 5.2.3790.3959) configured on a Windows 2003 R2 server in our environment. How do I generate an SSL cert with a stronger signature algorithm such as SHA1 or SHA2?
    Currently I am only able to generate SSL certs with md5RSA.

    Hi,
    Since you are using Windows Server 2003 R2 as the CA, the hash algorithm cannot be changed; in Windows 2008 and 2008 R2, changing the hash algorithm is possible.
    Therefore, you need to build a new CA to use a new algorithm.
    More information for you:
    Is it possible to change the hash algorithm when I renew the Root CA
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/91572fee-b455-4495-a298-43f30792357e/is-it-possible-to-change-the-hash-algorithm-when-i-renew-the-root-ca?forum=winserversecurity
    Changing public key algorithm of a CA certificate
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/0fd19577-4b21-4bda-8f56-935e4d360171/changing-public-key-algorithm-of-a-ca-certificate?forum=winserversecurity
    modify CA configuration after Migration
    http://social.technet.microsoft.com/Forums/windowsserver/en-US/0d5bcb76-3a04-4bcf-b317-cc65516e984c/modify-ca-configuration-after-migration?forum=winserversecurity
    Best Regards,
    Amy Wang

  • Tips required from algorithm practitioners on how to learn algorithms

    Hello,
    I am trying to improve my programming skills and more particularly my proficiency at algorithms.
    I decided to start with advanced sorting algorithms such as the quick sort and shell sort.
    Here is what I did in order to practise:
    -Working with a sample of data (array of integers) I started with going through the pseudo-code and manually applying each of the sort algorithms to the data with a sheet of paper in order to better understand those two algorithms.
    -Then I tried to code the algorithms in java (without looking at the code in my book of course) in Netbeans.
    Do you see other types of exercises (for now only about sorting algorithms as I will move to other algorithms later on) I could practise on in order to become more proficient?
    Best regards,
    J.

    Sounds good.
    Only other thing is to try different sets of data. And try a really large set of data, say 10,000 items. Just figuring out how to create that set can prove beneficial.
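    To make the large-data-set suggestion concrete, here is a minimal shell sort in Java (simple n/2, n/4, ..., 1 gap sequence; class and method names are mine) that you can run against arrays of any size and compare with your book's pseudo-code:

```java
import java.util.Arrays;

public class ShellSortDemo {
    // shell sort with the simple gap sequence n/2, n/4, ..., 1
    static void shellSort(int[] a) {
        for (int gap = a.length / 2; gap > 0; gap /= 2) {
            // gapped insertion sort
            for (int i = gap; i < a.length; i++) {
                int tmp = a[i];
                int j = i;
                while (j >= gap && a[j - gap] > tmp) {
                    a[j] = a[j - gap];
                    j -= gap;
                }
                a[j] = tmp;
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {9, 4, 7, 1, 8, 2};
        shellSort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 4, 7, 8, 9]
    }
}
```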

  • TIN Creation algorithms in Oracle 11g

    In the Oracle 11g Developers Guide http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28400/sdo_intro.htm#sthref99 it mentions that "Oracle TIN generation software uses the Delaunay triangulation algorithm, but it is not required that TIN data be formed using only Delaunay triangulation techniques." I was wondering how one would go about implementing other TIN algorithms such as Bowyer-Watson for updating existing TIN data in the SDO_TIN feature type.
    The fact that other algorithms can be selected is also mentioned in the Oracle 11g Spatial book, but with no guidance on how. Has anyone implemented this, either for 2.5D TINs or for full 3D TINs?
    Regards
    Sean

    If you have different databases on the same node, you could choose not to set ORACLE_SID in the shell init file but instead use the interactive oraenv Oracle script to set the right ORACLE_SID.
    Please read Configuring the Operating System Environment Variables in http://download.oracle.com/docs/cd/E11882_01/server.112/e10897/em_manage.htm#ADMQS12369
    To create a database easily (i.e. without using your own scripts), try DBCA, which will automatically take care of the database initialization file: http://download.oracle.com/docs/cd/E11882_01/server.112/e10897/install.htm#BABEIAID
    Edited by: P. Forstmann on 5 August 2011 12:01

  • Algorithm practice

    so i am playing around with some different search algorithms, one such one is:
    public static int bubbleUp (String list[], String partNumber)
    {
        // declares comparisons to be used as a counter
        int comparisons = 1;
        // declares moves as a counter to be used later on
        int moves = 0;
        // gets the variable partNumber and searches for it in list
        for (int i = 0; i < list.length; i++)
        {
            if (!partNumber.equals (list [i]))
            {
                // add one to comparisons
                comparisons++;
            }
            else
            {
                // found: move the entry one slot toward the front,
                // unless it is already at the front
                if (i > 0)
                {
                    list [i] = list [i - 1];
                    list [i - 1] = partNumber;
                    moves += 2;
                }
                break;
            }
        }
        // return the sum of comparisons and moves
        return comparisons + moves;
    }
    now i am trying to use a two dimensional array rather than a single array to search for different objects.
    My question is, how would you change my code i wrote above to function for 2d arrays?

    There's a saying: "code to the interface, not the implementation" and
    that saying can be applied here; all you want to know in your algorithm is this:
    public interface BubbleUpable {
       public Object get(int index);
       public void set(int index, Object obj);
       public int length();
    }
    One dimensional arrays can be wrapped in a simple wrapper class like this:
    public class ArrayWrapper implements BubbleUpable {
       private String[] a;
       public ArrayWrapper(String[] a) { this.a = a; }
       // interface implementation
       public Object get(int index) { return a[index]; }
       public void set(int index, Object obj) { a[index] = (String)obj; }
       public int length() { return a.length; }
    }
    I'm sure a bit of Java 1.5 can take away a bit of the explicit casting by applying
    some generics here.
    If you rewrite your algorithm such that it uses a BubbleUpable instead
    of a String[] array, you can implement another wrapper (see above) that
    wraps around a two dimensional array. Your algorithm wouldn't care at
    all about the implementation.
    kind regards,
    Jos
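    Following that advice, here is a hedged sketch of such a second wrapper: it flattens a rectangular two-dimensional String array row by row, so the bubbleUp algorithm can stay unchanged. The class name and the row-major mapping are my assumptions, not part of Jos's reply:

```java
interface BubbleUpable {
    Object get(int index);
    void set(int index, Object obj);
    int length();
}

// Hypothetical wrapper that presents a rectangular 2-D String array
// to the algorithm as one flat, row-major sequence.
class TwoDArrayWrapper implements BubbleUpable {
    private final String[][] a;
    private final int cols;

    TwoDArrayWrapper(String[][] a) {
        this.a = a;
        this.cols = a[0].length; // assumes all rows have equal length
    }

    public Object get(int index) { return a[index / cols][index % cols]; }
    public void set(int index, Object obj) { a[index / cols][index % cols] = (String) obj; }
    public int length() { return a.length * cols; }
}
```

    With this in place, bubbleUp(BubbleUpable list, String partNumber) works identically for one- and two-dimensional backing arrays.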

  • Sine-fit algorithm

    Hi,
    I would like to fit my data (about 200 kpoints) with a sine wave (with 4 parameters: A.sin(2pi.B.t + C) + D) in LabVIEW. I'm looking for a VI with a convergent algorithm (such as a least-squares solver) which returns the 4 parameters plus the RMS error between the measured data and the model. Or why not a more efficient algorithm...
    I had a look in the "Optimization" function set and also in the "Fitting" function set but I didn't find the right VI. Can someone advise me?
    Regards,
    Benjamin

    You can use the Levenberg-Marquardt fit, but usually the Extract Single Tone Information VI works well enough.
    There is a paper from the '99 NIWeek about the performance of this VI; I attached it because I missed the link.
    Depending on your signal, it might be wise to try blocks of complete periods and average the results....
    I attached a VI that uses this single tone VI to monitor the line frequency; just abuse a headphone as a coil receiver, plug it into the mic input of your soundcard, and place the headphone near a transformer...
    Greetings from Germany
    Henrik
    LV since v3.1
    “ground” is a convenient fantasy
    '˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'
    Attachments:
    line freq with soundcard v2.vi ‏35 KB
    FFT tone detection Moriat NIWeek99.ppt ‏1159 KB
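    For comparison outside LabVIEW, the core idea can be sketched in Java: once the frequency B is known (e.g. from an FFT peak, which is what a single-tone detector gives you), the model A·sin(2πBt + C) + D is linear in a = A·cos C, b = A·sin C, and D, so a plain least-squares solve recovers all parameters without iteration. The class and the small 3×3 solver below are illustrative only:

```java
public class SineFitDemo {
    // Solve a 3x3 system M x = v by Gauss-Jordan elimination with partial pivoting.
    static double[] solve3(double[][] M, double[] v) {
        int n = 3;
        double[][] A = new double[n][n + 1];
        for (int i = 0; i < n; i++) { System.arraycopy(M[i], 0, A[i], 0, n); A[i][n] = v[i]; }
        for (int c = 0; c < n; c++) {
            int p = c;
            for (int r = c + 1; r < n; r++) if (Math.abs(A[r][c]) > Math.abs(A[p][c])) p = r;
            double[] t = A[c]; A[c] = A[p]; A[p] = t;
            for (int r = 0; r < n; r++) if (r != c) {
                double f = A[r][c] / A[c][c];
                for (int k = c; k <= n; k++) A[r][k] -= f * A[c][k];
            }
        }
        return new double[] { A[0][3] / A[0][0], A[1][3] / A[1][1], A[2][3] / A[2][2] };
    }

    // With B fixed, A*sin(2*pi*B*t + C) + D = a*sin(wt) + b*cos(wt) + D,
    // where a = A*cos(C), b = A*sin(C): ordinary least squares applies.
    static double[] fit(double[] t, double[] y, double B) {
        double w = 2 * Math.PI * B;
        double[][] M = new double[3][3];
        double[] v = new double[3];
        for (int i = 0; i < t.length; i++) {
            double[] basis = { Math.sin(w * t[i]), Math.cos(w * t[i]), 1.0 };
            for (int r = 0; r < 3; r++) {
                v[r] += basis[r] * y[i];
                for (int c = 0; c < 3; c++) M[r][c] += basis[r] * basis[c];
            }
        }
        double[] abd = solve3(M, v);
        double A = Math.hypot(abd[0], abd[1]);
        double C = Math.atan2(abd[1], abd[0]);
        return new double[] { A, B, C, abd[2] }; // A, B, C, D
    }

    public static void main(String[] args) {
        double[] t = new double[200], y = new double[200];
        for (int i = 0; i < 200; i++) {
            t[i] = i * 0.005;
            y[i] = 2.0 * Math.sin(2 * Math.PI * 3.0 * t[i] + 0.5) + 1.0;
        }
        double[] p = fit(t, y, 3.0);
        // recovers A = 2, C = 0.5, D = 1 on this noiseless data
        System.out.printf("A=%.3f C=%.3f D=%.3f%n", p[0], p[2], p[3]);
    }
}
```

    If B is also unknown, this linear solve becomes the inner step of an outer one-dimensional search (or of Levenberg-Marquardt, as suggested above).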

  • Lightroom 4b - basic tone controls are not intuitive, and need some work (in my opinion).

    I've come up with a variety of settings that I consider optimal in PV2012 that turn out to be totally counter intuitive.
    I don't think this is a good thing.
    One should be able to:
    - up the exposure when it's underexposed, and down the exposure when it's over-exposed.
    - adjust shadows and highlights (and midtone balance) to taste...
    - fine tune darkest tones by adjusting blacks slider (maybe have a separate control for adjusting black clipping point from the control that is used for stretching or compressing darkest tones most, or just use tone curve...).
    - fine tune lightest tones by adjusting whites slider (we already have a separate control for adjusting white clipping point: exposure).
    That is not how Lr4b is working for me at all.
    Having to radically increase exposure to brighten the shadows, then set the highlights to an extreme negative value when the net effect ends up being brighter highlights, is indeed *very* counter-intuitive.
    Conversely, I've had cases when I've had to radically reduce exposure on photos that weren't particularly over-exposed, in order to get the balance of mid-tones/shadows/highlights to work out. - very counter-intuitive, and took me a long time to get it right.
    See related thread: http://feedback.photoshop.com/photoshop_family/topics/lightroom_4b_improve_basic_tone_controls
    PS - I've also found myself adjusting blacks or whites in order to reach further into the midtones, instead of shadows & highlights - this is just bass freakun' ackwards!...
    Rob

    Hi Bill,
    All input is welcome.
    Whites slider is the exposure slider's second cousin. Like exposure, it affects all tones, but it affects the brightest tones more than exposure does. It's almost like the right-end equivalent to fill-light in Lr3. If you just want to get overall exposure + shadows and highlights in the ballpark, it can be done very quickly with Lr4b. I however always want to control the exact brightness of each and every tone, from black to white, and do it in such a way that there are no "flat" regions or desaturated regions. That's when things get tricky.
    For example, what if you have a photo whose tones are in the ballpark, but the middle tone is not bright enough? This happens a lot to people who under-expose in the interest of preserving highlights, then need to bring up the brightness of the whole picture, except for the whites, bottom end mostly. In Lr3, you increase fill light, and if it brightens too much somewhere you decrease brightness there. In Lr4b, increasing the shadows won't reach it. You can get at it by increasing the blacks, but then that unanchors the black point (and messes up the darkest tones, which weren't anywhere near the midtone target). You can't pull it rightward using highlights, because that won't reach either, and anyway will overbrighten highlights. You can pull it leftward using whites, but then your whites may become too bright.
    So, what do you do? Increase exposure. But then that blows out the whole top end. So what do you do? Decrease highlights. But now you've just affected a bunch of other tones which were not near the tones that were the original target of your desired adjustment. It's all very squirrelly and non-intuitive and requires iteration.
    Note: I've spent several dozen hours experimenting; I'm not an Lr4b newbie by a long shot - granted, I'm also still very much learning.
    I really think Adobe should consider altering the algorithm such that one can use the slider that represents the tones closest to the desired target tone, then adjust the next closest sliders as compensation... - Everything blending as gracefully as possible with the adjacent zone.
    e.g. (not talking about setting white and black point, which should be independent of all this)
    - blacks: adjusts darkest tones most, shadow tones next most, midtones next most, highlight tones next most, and lightest tones least.
    - shadows: adjusts dark tones most, black tones and midtones next most...
    - midtones: adjusts midtones most, dark and light tones next most...
    - highlights: adjusts light tones most, midtones and lightest tones next most...
    - whites: adjusts lightest tones most, light tones next most, midtones next most...
    Kinda like the tone curve, except with the benefits of the magic logic for recombining tones, which is what makes adjusting tone via the tone controls different than using the tone curve in the first place.
    This way, if you want to add fill light into the deepest regions, you increase the blacks most, if that brightens the shadows too much, then adjust the shadows a little, if that affects the midtones too much, then adjust the midtones a tiny bit, and if that affects the highlights a smidge, then adjust the highlights by the tiniest of smidges. Affect on whites by blacks slider is probably negligible.
    In other words, make the implementation match the zones as you see them in the histogram. For example, in the histogram we have:
    - blacks
    - shadows
    - exposure
    - highlights
    - whites
    Except they don't really do what's suggested by that zone graph:
    - black slider adjusts midtones and light tones more than the shadow slider does.
    - shadow slider hits a midtone wall.
    - exposure affects lighter tones even more than midtones.
    - highlights hits a midtone wall.
    - whites adjusts midtones and dark tones more than highlights slider.
    Is it possible to get exactly what you want with this scheme? - yes.
    Is it tricky as he|| - yes.
    So Bill, if Lr4b is working for you, then more power to ya. I can make it work for me too. But I think the implementation is not very intuitive and requires multiple iterations which can take a lot of time, and there may be room for improvement - thus this thread.
    Rob

  • Is this a bug? Stroke does not display for rectangles?

    Working in a room full of new iMacs with CS6, I used the rectangle tool to draw shapes on screen and the shapes would not completely display. It was almost like other shapes were covering parts of the shapes, so for example only 2 of 4 lines on a rectangle would show, sometimes only 3 lines would show, etc. I tried changing the stroke from size 1 to 2 and sometimes that helped, sometimes closing the file and opening a new one helped, and other times nothing helped, even making the stroke a size 4 or 5. This problem occurred on multiple iMacs.
    Is there a setting or preference that may be causing this? Is this a known problem?
    Thanks.

    Yeah, it's not a bug, but I do consider it a major weakness in Fireworks screen rendering behavior. Unlike AI, Fireworks is constantly in "Pixel Preview" mode, and it's locked into Nearest Neighbor zoom rendering—possibly the crudest of rendering algorithms. This might have been fine in the early days of web design, or even several years ago, but if you're using the app to design graphics for a high-resolution device (e.g. a retina display) on a standard resolution monitor, and you'd like to view the graphic at near the actual size (e.g. 60%), the results look awful.
    I think this could be fixed by offering an additional View option (e.g. Improved Rendering) that would use a different rendering algorithm, such as Bicubic or even a mixture of Bicubic Smoother/Bicubic Sharper.
    I created a post about this issue quite a few months back; however, I may not have submitted a feature request, because I was still using Fireworks 8 at the time and wasn't sure if CS5 had addressed the problem:
    http://forums.adobe.com/message/4335629
    You can submit a feature request to Adobe regarding this issue using their official online reporting form:
    https://www.adobe.com/cfusion/mmform/index.cfm?name=wishform
    There is at least one workaround: If you define an object as a Symbol and then resize the instance on the canvas, it will be rendered using the algorithm specified in FW's Preferences (under General > Interpolation). I think this works anyway; it's been a while since I tried it.
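    For what it's worth, the rendering-algorithm choice being discussed maps directly onto Java2D's interpolation hints; a small sketch (not how Fireworks itself is implemented) comparing nearest-neighbor and bicubic scaling of the same image:

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class ScaleDemo {
    // scale src to (w, h) using the given interpolation hint
    static BufferedImage scale(BufferedImage src, int w, int h, Object hint) {
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION, hint);
        g.drawImage(src, 0, 0, w, h, null);
        g.dispose();
        return dst;
    }

    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        // crude zoom, comparable to what a pixel-preview mode locks you into
        BufferedImage crude = scale(src, 60, 60,
                RenderingHints.VALUE_INTERPOLATION_NEAREST_NEIGHBOR);
        // smoother downscale at the same 60% view size
        BufferedImage smooth = scale(src, 60, 60,
                RenderingHints.VALUE_INTERPOLATION_BICUBIC);
        System.out.println(crude.getWidth() + " " + smooth.getWidth());
    }
}
```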

  • Cisco ACS 4.2 and Windows 2008 R2 CA

    Has anyone been successful in getting a cert off of a 2008 R2 CA and importing it correctly into ACS 4.2?  I've had, and have seen others have, the problem of creating a web server certificate from R2 (1024 bit) and putting it in ACS 4.2, only to have HTTPS/SSL no longer work correctly.  I haven't even tested the intended purpose of the cert (EAP-TLS) yet, so who knows if that works.  I've also seen through searching where someone was able to take a 2003 CA web server template and put it into R2 and it worked, but I no longer have 2003 available.  Any ideas?
    Thanks,
    Raun

    I have seen issues where the templates on the R2 boxes are using elliptic curve cryptography; basically, if the template has a '#' character in front, I think that causes this process to be used. Try to use a template that doesn't have this in the front and then try to generate a cert against the template you created.
    Here is a snip of the guide that I am forwarding you:
    Determining Whether to Implement Cryptography Next Generation Algorithms
    For Windows Server 2008–based version 3 certificate  templates, the option exists to configure advanced cryptographic  algorithms such as elliptic curve cryptography (ECC). Before configuring  these settings, ensure that the operating systems and applications  deployed in your environment can support these cryptographic algorithms.
    http://technet.microsoft.com/en-us/library/cc731705%28v=ws.10%29.aspx
    Screenshots in another article:
    http://technet.microsoft.com/en-us/library/cc725621%28v=ws.10%29.aspx
    Thanks,
    Tarik Admani

  • Java Security Model: Java Protection Domains

    1.     Policy Configuration
    Until now, security policy was hard-coded in the security manager used by Java applications. This gives us the effective but rigid Java sandbox for applets. A major enhancement to the Java sandbox is the separation of policy from mechanism. Policy is now expressed in a separate, persistent format. The policy is represented in simple ASCII, and can be modified and displayed by any tool that supports the policy syntax specification. This allows:
    o     Configurable policies -- no longer is the security policy hard-coded into the application.
    o     Flexible policies -- Since the policy is configurable, system administrators can enforce global polices for the enterprise. If permitted by the enterprise's global policy, end-users can refine the policy for their desktop.
    o     Fine-grain policies -- The policy configuration file uses a simple, extensible syntax that allows you to specify access on specific files or to particular network hosts. Access to resources can be granted only to code signed by trusted principals.
    o     Application policies -- The sandbox is generalized so that applications of any stripe can use the policy mechanism. Previously, to establish a security policy for an application, a developer needed to implement a subclass of the SecurityManager, and hard-code the application's policies in that subclass. Now, the application can make use of the policy file and the extensible Permission object to build an application whose policy is separate from the implementation of the application.
    o     Extensible policies -- Application developers can choose to define new resource types that require fine-grain access control. They need only define a new Permission object and a method that the system invokes to make access decisions. The policy configuration file and policy tools automatically support application-defined permissions. For example, an application could define a CheckBook object and a CheckBookPermission.
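    For illustration, a grant entry in such a policy file might look like this (the codeBase, signer alias, and permissions are made-up examples, not defaults):

```
grant signedBy "trusted", codeBase "http://example.com/classes/" {
    permission java.io.FilePermission "/tmp/*", "read";
    permission java.net.SocketPermission "host.example.com:7777", "connect";
};
```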
    2.     X.509v3 Certificate APIs
    Public-key cryptography is an effective tool for associating an identity with a piece of code. JavaSoft is introducing API support in the core APIs for X.509v3 certificates. This allows system administrators to use certificates from enterprise Certificate Authorities (CAs), as well as trusted third-party CAs, to cryptographically establish identities.
    3.     Protection Domains
    The central architectural feature of the Java security model is its concept of a Protection Domain. The Java sandbox is an example of a Protection Domain that places tight controls around the execution of downloaded code. This concept is generalized so that each Java class executes within one and only one Protection Domain, with associated permissions.
    When code is loaded, its Protection Domain comes into existence. The Protection Domain has two attributes - a signer and a location. The signer could be null if the code is not signed by anyone. The location is the URL where the Java classes reside. The system consults the global policy on behalf of the new Protection Domain. It derives the set of permissions for the Protection Domain based on its signer/location attributes. Those permissions are put into the Protection Domain's bag of permissions.
    4.     Access Decisions
    Access decisions are straightforward. When code tries to access a protected resource, it creates an access request. If the request matches a permission contained in the bag of permissions, then access is granted. Otherwise, access is denied. This simple way of making access decisions extends easily to application-defined resources and access control. For example, the banking application allows access to the CheckBook only when the executing code holds the appropriate CheckBookPermission.
    Sandbox model for Security
    Java is supported in applications and applets, small programs that spurred Java's early growth and are executable in a browser environment. The applet code is downloaded at runtime and executes in the context of a JVM hosted by the browser. An applet's code can be downloaded from anywhere in the network, so Java's early designers thought such code should not be given unlimited access to the target system. That led to the sandbox model -- the security model introduced with JDK 1.0.
    The sandbox model deems all code downloaded from the network untrustworthy, and confines the code to a limited area of the browser -- the sandbox. For instance, code downloaded from the network could not update the local file system. It's probably more accurate to call this a "fenced-in" model, since a sandbox does not connote strict confinement.
    While this may seem a very secure approach, there are inherent problems. First, it dictates a rigid policy that is closely tied to the implementation. Second, it's seldom a good idea to put all one's eggs in one basket -- that is, it's unwise to rely entirely on one approach to provide overall system security.
    Security needs to be layered for depth of defense and flexible enough to accommodate different policies -- the sandbox model is neither.
    java.security.ProtectionDomain
    This class represents a unit of protection within the Java application environment, and is typically associated with a concept of "principal," where a principal is an entity in the computer system to which permissions (and as a result, accountability) are granted.
    A domain conceptually encloses a set of classes whose instances are granted the same set of permissions. Currently, a domain is uniquely identified by a CodeSource, which encapsulates two characteristics of the code running inside the domain: the codebase (java.net.URL), and a set of certificates (of type java.security.cert.Certificate) for public keys that correspond to the private keys that signed all code in this domain. Thus, classes signed by the same keys and from the same URL are placed in the same domain.
    A domain also encompasses the permissions granted to code in the domain, as determined by the security policy currently in effect.
    Classes that have the same permissions but are from different code sources belong to different domains.
    A class belongs to one and only one ProtectionDomain.
    Note that currently in Java 2 SDK, v 1.2, protection domains are created "on demand" as a result of class loading. The getProtectionDomain method in java.lang.Class can be used to look up the protection domain that is associated with a given class. Note that one must have the appropriate permission (the RuntimePermission "getProtectionDomain") to successfully invoke this method.
    Today all code shipped as part of the Java 2 SDK is considered system code and runs inside the unique system domain. Each applet or application runs in its appropriate domain, determined by its code source.
    It is possible to ensure that objects in any non-system domain cannot automatically discover objects in another non-system domain. This partition can be achieved by careful class resolution and loading, for example, using different classloaders for different domains. However, SecureClassLoader (or its subclasses) can, at its choice, load classes from different domains, thus allowing these classes to co-exist within the same name space (as partitioned by a classloader).
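    The on-demand protection domains described here can be inspected directly; a minimal sketch using java.lang.Class.getProtectionDomain (run without a security manager, the printed permission set will typically be quite broad):

```java
import java.security.CodeSource;
import java.security.ProtectionDomain;

public class DomainDemo {
    public static void main(String[] args) {
        // every class belongs to exactly one ProtectionDomain
        ProtectionDomain pd = DomainDemo.class.getProtectionDomain();
        // the CodeSource pairs the codebase URL with any signer certificates
        CodeSource cs = pd.getCodeSource();
        System.out.println("code source: " + cs);
        System.out.println("permissions: " + pd.getPermissions());
    }
}
```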
    jarsigner and keytool
    example:
    cd D:\EicherProject\EicherWEB\Web Content
    jarsigner -keystore eicher.store source.jar eichercert
    The javakey tool from JDK 1.1 has been replaced by two tools in Java 2.
    One tool manages keys and certificates in a database. The other is responsible for signing and verifying JAR files. Both tools require access to a keystore that contains certificate and key information to operate. The keystore replaces the identitydb.obj from JDK 1.1. New to Java 2 is the notion of policy, which controls what resources applets are granted access to outside of the sandbox (see Chapter 3).
    The javakey replacement tools are both command-line driven, and neither requires the use of the awkward directive files required in JDK 1.1.x. Management of keystores, and the generation of keys and certificates, is carried out by keytool. jarsigner uses certificates to sign JAR files and to verify the signatures found on signed JAR files.
    Here we list simple steps of doing the signing. We assume that JDK 1.3 is installed and the tools jarsigner and keytool that are part of JDK are in the execution PATH. Following are Unix commands, however with proper changes, these could be used in Windows as well.
    1. First generate a key pair for our Certificate:
    keytool -genkey -keyalg rsa -alias AppletCert
    2. Generate a certification-signing request.
    keytool -certreq -alias AppletCert > CertReq.pem
    3. Send this CertReq.pem to VeriSign/Thawte webform. Let the signed reply from them be SignedCert.pem.
    4. Import the chain into keystore:
    keytool -import -alias AppletCert -file SignedCert.pem
    5. Sign the CyberVote archive "TeleVote.jar":
    jarsigner TeleVote.jar AppletCert
    This signed applet TeleVote.jar can now be made available to the web server. For testing purposes we can have our own test root CA. Following are the steps to generate a root CA using openssl.
    1. Generate a key pair for root CA:
    openssl genrsa -des3 -out CyberVoteCA.key 1024
    2. Generate an x509 certificate using the above keypair:
    openssl req -new -x509 -days 365 -key CyberVoteCA.key -out CyberVoteCA.crt
    3. Import the Certificate to keystore.
    keytool -import -alias CyberVoteRoot -file CyberVoteCA.crt
    Now, in step 3 of the jar signing above, instead of sending the certificate request to the VeriSign/Thawte webform for signing, we can sign it using our newly created root CA with this command:
    openssl x509 -req -CA CyberVoteCA.crt -CAkey CyberVoteCA.key -days 365 -in CertReq.pem -out SignedCert.pem -CAcreateserial
    However, our test root CA has to be imported into the keystore of the voter's web browser in some way. [This was not investigated. We used a manual importing procedure which is not the recommended way.]
    The Important Classes
    The MessageDigest class, which is used in current CyberVote mockup system (see section 2), is an engine class designed to provide the functionality of cryptographically secure message digests such as SHA-1 or MD5. A cryptographically secure message digest takes arbitrary-sized input (a byte array), and generates a fixed-size output, called a digest or hash. A digest has the following properties:
    - It should be computationally infeasible to find two messages that hash to the same value.
    - The digest does not reveal anything about the input that was used to generate it.
    Message digests are used to produce unique and reliable identifiers of data. They are sometimes called the "digital fingerprints" of data.
    The (Digital)Signature class is an engine class designed to provide the functionality of a cryptographic digital signature algorithm such as DSA or RSA with MD5. A cryptographically secure signature algorithm takes arbitrary-sized input and a private key and generates a relatively short (often fixed-size) string of bytes, called the signature, with the following properties:
    - Given the public key corresponding to the private key used to generate the signature, it should be possible to verify the authenticity and integrity of the input.
    - The signature and the public key do not reveal anything about the private key.
    A Signature object can be used to sign data. It can also be used to verify whether or not an alleged signature is in fact the authentic signature of the data associated with it.
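    A short sketch exercising both engine classes with standard JDK algorithm names (SHA-1 and SHA256withRSA chosen for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.Signature;

public class SignDemo {
    public static void main(String[] args) throws Exception {
        byte[] data = "CyberVote".getBytes(StandardCharsets.UTF_8);

        // digest: fixed-size fingerprint of arbitrary-sized input
        byte[] hash = MessageDigest.getInstance("SHA-1").digest(data);
        System.out.println("digest length: " + hash.length); // 20 bytes for SHA-1

        // sign with the private key, verify with the matching public key
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(kp.getPrivate());
        signer.update(data);
        byte[] sig = signer.sign();

        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(kp.getPublic());
        verifier.update(data);
        System.out.println("verified: " + verifier.verify(sig)); // prints "verified: true"
    }
}
```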
    ----Cheers
    ---- Dinesh Vishwakarma

    Hi,
    these concepts are used and implemented in jGuard(www.jguard.net) which enable easy JAAS integration into j2ee webapps across application servers.
    cheers,
    Charles(jGuard team).

  • Where is the "blur kernel" one click deblur in Photoshop CS6?

    Where is the "blur kernel" one click deblur in Photoshop CS6? (Pretty sure it's not there.)
    Is deblur still in development? Or was it killed? Is there any status update on the progress of the deblur function? An overwhelming majority of the information available reference the same October 2011 Adobe Max conference demo.
    (Adobe showed off the new deblur prototype at its Adobe Max conference in October during a set of feature sneak peeks...)
    The way the feature was described was:
    "The prototype feature uses a computer algorithm to analyze how the image was blurred and then creates what Adobe called a "blur kernel." After the kernel is generated, it can show you information such as what the motion trajectory of the camera was while the shutter was open -- causing the blur in the first place. The next step is to simply hit a "restore sharp image" button and the photograph is fixed."
    It was cited by some sources as being an upcoming Photoshop CS6 feature while other sources were more vague with statements resembling "It's not clear if or when the new unblur feature will make it into a future version of Photoshop, as the company warned all sneak peek features were just prototypes with no concrete product plans."
    What is the latest on the Adobe Photoshop "blur kernel" and the Adobe Photoshop one click deblur feature?

    one of the things I think about as an engineer is that in order for something like this to work,  you should probably start with manuals adjustments to processing the the out-of-focus blur first. after you have finished with this, you can do the motion blur.  I would have to really think hard about it to see if there is an algorithm such as a kernel or something to convert an image into a path. one problem is you are stuck with the image boundaries. if they weren't there, things would be so much easier (but it would also take infinitely long to process).  the image boundaries MIGHT skew the kernel, depending on how you handle the boundaries (for example, usually you just assume those values are black or 0's, or you may choose a 50% gray, whatever fits best with the kernel you are working with).
    Adobe may be on to something there, but they only have half of the equation.
    I used to tinker with image processing for fun many years ago in BASICA/GWBASIC.
    if you have the kernel
    [-2 0 2]
    [-2 0 2]
    [-2 0 2]
    which amplifies vertical edges, you would want to handle the boundaries by treating the out-of-bounds values as 0's.
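As a concrete illustration (my own sketch, not Adobe's code), here is how a 3x3 kernel like the one above would be applied to a grayscale image, with the boundary handled by assuming out-of-bounds pixels are 0 (black):

```java
public class Convolve {
    // Returns the correlation of img with a 3x3 kernel, zero-padding the borders.
    static double[][] apply(double[][] img, double[][] k) {
        int h = img.length, w = img[0].length;
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double sum = 0;
                for (int ky = -1; ky <= 1; ky++) {
                    for (int kx = -1; kx <= 1; kx++) {
                        int yy = y + ky, xx = x + kx;
                        // boundary handling: assume 0 (black) outside the image
                        double v = (yy < 0 || yy >= h || xx < 0 || xx >= w)
                                ? 0 : img[yy][xx];
                        sum += v * k[ky + 1][kx + 1];
                    }
                }
                out[y][x] = sum;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] kernel = { {-2, 0, 2}, {-2, 0, 2}, {-2, 0, 2} };
        // a vertical edge: dark left half, bright right half
        double[][] img = {
            {0, 0, 1, 1},
            {0, 0, 1, 1},
            {0, 0, 1, 1},
        };
        double[][] out = apply(img, kernel);
        // the kernel responds strongly along the vertical edge
        System.out.println(out[1][1] + " " + out[1][2]); // prints 6.0 6.0
    }
}
```

Note how the choice of boundary value (0 here, 50% gray elsewhere) directly changes the sums computed near the edges, which is exactly the kernel-skewing issue mentioned above.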
    NOTE TO ADOBE: it might start getting into Machine Vision technology when you start doing the path recognition. or AI.
    If you can get the image into a path kernel, it's essentially a bitmap of a path. You can trace that path using Akima spline curves (you will lose 1-2 samples off each end), and the tracing program might already exist out there somewhere; just google "bitmap to akima spline curve". An Akima spline curve passes through all the points along its path, unlike a Bezier. I like Akima spline curves and have been trying to get PS engineers to use them for a while, for good reason; this is one of them.
    There are also cubic splines as an alternative.
    Chances are that the traced curve will have rounded corners due to the weight of the camera.
    Now that you have the path, it's simply a matter of taking samples along the curve wherever you want (usually in an array, which you can linearly interpolate across for simplicity if you want).

  • How to create a demo with time limit?  (timebomb)

    Hello,
    I would like to create a demo of a game I made but want to limit the amount of time it will run once installed.  (30 days).
    Is there an xtra or a straightforward way to do this in lingo?

    there is no straightforward way to do it in any programming language. I'm currently developing a Director-specific trial version solution, but it's not ready yet for general/commercial use. Are you looking for a free solution or are you willing to pay?
    Trial Versions are used so users can try out your software and then buy it if the software is something they want. So, there are two parts to trial software;
    1. the code that handles the trial version
    2. the code that handles the product serial and registration/licence keys
    The code that handles the trial version needs to be able to do these things:
    1. Keep track of the trial period. This involves recording the first run date/time and incrementing that date/time when the software is launched each subsequent time and comparing that against the trial period.
    2. Protect against backdating - turning back the computer clock to get more trial time.
    3. Protect against uninstall/reinstall.
    Dealing with all three of these issues becomes complicated; let's look at a solution using a licence file. What about a licence file will solve all the issues listed above?
    1. The licence file will contain the First Run Date (FRD), the Last Run Date (LRD), and other specific user and product info.
    2. A licence file should be encrypted to ensure it cannot be tampered with.
    3. A licence file should be moved to the user's AppData (or equivalent) folder by the installer software (such as NSIS). This ensures the first run date is only ever recorded once. What do I mean by this? If your app had to check first before opening and writing to the licence file, then someone could easily circumvent the trial by deleting the file, at which point your application would be forced to write the licence file again and the user could start the trial over. The installer, however, only ever runs once. So if it copies over the licence file during installation, then our application only has to check that the file is there when it first runs: if it's not there, the trial has been tampered with; if it's there, read the FRD; and if the FRD is blank, we can be sure we are writing it for the first and only time.
    4. The file could be hidden to ensure that an un/reinstallation of your software wouldn't circumvent your trial security. Another method is to write to the registry some entry that's obscure or looks like some other info for your software... this is what's called security by obscurity which is frowned upon in the industry, in general, but there's really no way around it in the case of trial software methods.
    5. A licence file can be used as a red-herring if you want to use an obscure/hidden registry entry to save the real trial version information. If that's the case, you should follow the same steps as above and have your installer write the initial registry entries so it cannot be as easily circumvented by your code checks.
    Encryption is a big part of trial version security. If you're using MX2004 or above then you can find javascript ports of some strong encryption algorithms such as AES or RSA. RSA is an asymmetrical (ie public/private key) encryption algorithm whereas AES is a symmetrical (ie. private key) algorithm. The important things to look for in an implementation are ones that will encrypt a variable length string. These implementations use modes to encrypt fixed length chunks of the string, thus making the core encryption algorithm useful in practical situations such as encrypting variable length strings.
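To make the encryption step concrete, here is a minimal sketch of encrypting a variable-length licence string with AES in a block mode. It uses Java's built-in javax.crypto purely for illustration (in Director you would use one of the JavaScript ports mentioned above); the licence contents shown are hypothetical:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.security.SecureRandom;
import java.util.Base64;

public class LicenceCrypto {
    static byte[] encrypt(String plain, SecretKey key, byte[] iv) throws Exception {
        // CBC mode + PKCS5 padding handles a variable-length string
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        return c.doFinal(plain.getBytes("UTF-8"));
    }

    static String decrypt(byte[] ct, SecretKey key, byte[] iv) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        return new String(c.doFinal(ct), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // hypothetical licence contents: first run date, last run date, user info
        String licence = "FRD=2012-01-01|LRD=2012-01-15|user=jsmith";

        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);                     // 128-bit AES key
        SecretKey key = kg.generateKey();

        byte[] iv = new byte[16];         // random IV, stored alongside the file
        new SecureRandom().nextBytes(iv);

        byte[] ct = encrypt(licence, key, iv);
        System.out.println("encrypted: " + Base64.getEncoder().encodeToString(ct));
        System.out.println("round trip ok: " + decrypt(ct, key, iv).equals(licence));
    }
}
```

In a real trial scheme the AES key would be derived or embedded rather than freshly generated, since the app must be able to decrypt the licence file on every launch.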
    Creating a trial/registration .dir file (trial window) that gets published with the main application .dir file:
    Below, I've outlined these steps so you can see what I mean. I've tested this process and it's solid, AFAIK. Here are the steps:
    1. Create a project and name it movie1. Add a framescript with the 'go the frame' code on it.
    2. Add a label named 'continue' on the frame just after the 'go the frame' framescript.
    3. Add a button, and this code:
    go to frame "continue"
    4. Now, go into the Publish Settings options and go to the 'Files' tab and add any of your other .dir files you want under the 'Additional Files:' heading (the option to Play every movie in list will be checkmarked by default. Leave it like that).
    5. Now, Publish your project and you will have ONE .exe called movie1.exe which contains two .dir files that have been published.
    6. Open your User Temp folder and observe the folder that's created when you run movie1.exe... no temp .dir file is created when you run movie1.exe
    7. Creating another .dir with this code:
    go to frame "continue" of movie("movie1")
    ...does not work as it's looking for a .dir or .dxr or .dcr and will not work with an .exe.
    What this means is you can create a single executable that runs the trial window with all your trial version code and if everything checks out in the licence file then you can go to the main application movie. The trial window can be used to display the trial information, including a registration section, if you like. Google examples of trial version software to get ideas of what should be included in a trial version display.
    Resources:
    AES encryption written in Javascript: http://www.movable-type.co.uk/scripts/aes.html
    DOUG Article I wrote on creating product keys: http://director-online.com/forums/read.php?2,22279,22303#msg-22303
    Block Cipher Modes: http://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Counter_.28CTR.29
    AES - Wikipedia: http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
    RSA - http://en.wikipedia.org/wiki/RSA
    Pseudocode for recording and comparing dates
    FRD = First Run Date
    LRD = Last Run Date -- this one would get updated every time the app was run.
    CRD = Current Run Date -- the systemDate
    NumDays = 30 -- the number of days for trial version
    dateGood = False
    If LRD > CRD Then
    -- the system clock was rolled back
    dateGood = False
    Else If (the systemDate) > (NumDays + FRD) Then
    -- trial version has expired
    dateGood = False
    Else If CRD > LRD Then
    -- everything is ok, so write a new LRD date in registry or wherever else
    dateGood = True
    Else If CRD = LRD Then
    -- the app was already run today; both dates are good
    dateGood = True
    Else
    dateGood = False
    End If
    Typical Place to Write Application data to the Registry:
    HKCU\Software\<AppName>\<version>\
    eg. HKCU\Software\TRiShield\1.0\
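The pseudocode above can be sketched as a small function. This is Java purely for illustration (in Director you would write the same logic in Lingo against the systemDate); all names are my own:

```java
import java.time.LocalDate;

public class TrialCheck {
    // Returns true if the trial is still valid; mirrors the pseudocode above.
    // frd = First Run Date, lrd = Last Run Date, crd = Current Run Date.
    static boolean dateGood(LocalDate frd, LocalDate lrd, LocalDate crd, int numDays) {
        if (lrd.isAfter(crd)) {
            return false;   // the system clock was rolled back
        } else if (crd.isAfter(frd.plusDays(numDays))) {
            return false;   // trial version has expired
        } else {
            return true;    // CRD >= LRD and within the trial window: write new LRD
        }
    }

    public static void main(String[] args) {
        LocalDate frd = LocalDate.of(2012, 1, 1);
        System.out.println(dateGood(frd, LocalDate.of(2012, 1, 10),
                LocalDate.of(2012, 1, 15), 30)); // true: everything ok
        System.out.println(dateGood(frd, LocalDate.of(2012, 1, 20),
                LocalDate.of(2012, 1, 10), 30)); // false: clock rolled back
        System.out.println(dateGood(frd, LocalDate.of(2012, 2, 1),
                LocalDate.of(2012, 2, 15), 30)); // false: 30 days exceeded
    }
}
```

The caller is responsible for reading FRD/LRD from the licence file or registry entry and writing the new LRD back when the check passes.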

  • SSL / X.509 In SOAP Sender/Receiver Adapter

    Hi Friends,
    We have few third party Java based systems which need to integrate with SAP PI7.1
    For this we are using
    SOAP Sender from Third PartyTo PI
    SOAP Receiver From Pi To Third Party Systems                                 
    The customer wants to implement SSL/X.509 certificates for encryption and decryption as one of the options.
    We are facing a few issues:
    I am assuming each of the source system web service calls will have to use a username/password to authenticate with the PI system.
    a. Will this use 'basic authentication', i.e., credentials sent over as part of the HTTP header field?
         i. Assuming we use SSL for transport level security - this is still not secure, as the credentials themselves are not encrypted.
         ii. Is there a way to send in encrypted credentials and for the PI layer to decrypt them, validate and process the request?
    b. Should we consider using a single sign-on mechanism?
    c. Should we consider using X.509 digital certificates?
         i. This would require that the X.509 certs are maintained in the source & PI web server Java key stores.
    d. Should we also consider digitally signing the payload?
         i. This requires using an appropriate hashing algorithm such as SHA-1 or MD5.
    The SOAP sender/receiver adapter has only a few properties, none specific to these requirements. How do we achieve this?
    Regards
    Chandra Dasari

    Hi Chandra,
    You may try to implement this using the AXIS framework of the SOAP adapter. This provides functionality for handling of X.509 encryption and decryption.
    You can generate/get the digital certificate and use it for both transport level as well as message level security. You would not require any additional coding apart from this.
    Coming to your queries:
    Q - I am assuming each of the source system web service calls will have to use a username/password to authenticate with the PI system
    A - If you are using a certificate, then they can call XI using this certificate. You can share your public certificate with each of the parties.
    Q. Will this use 'basic authentication', ie., credentials sent over as part of the HTTP header field?
    A - Depends...if you are using basic authentication, then it will not be via X.509. It will be the normal process. These two are two different things.
    Q. Assuming we use SSL for transport level security - this is still not secure as the credentials are not encrypted
    A - This problem is resolved if you are using digital certificates.
    Q. Is there a way to send in encrypted credentials and for the PI layer to decrypt the same, validate and process the request?
    A - Yes. It is possible. But then you will have to implement encryption decryption logic at both the ends separately if you are not using certificates.
    Q. Should we consider using a single sign-on mechanism?
    A - Is your third party part of your landscape? if not then you might want to check and confirm this approach with your security adviser.
    Q Should we consider using X.509 digital certificates?
    A - Yes...This would resolve most of your problems.
    Q. This would require that the X.509 certs are maintained in the Source & PI web server Java key stores
    A - Yes.
    Q. Should we also consider digitally signing the payload?
    A - If you require message level encryption along with transport layer.
    Q. This requires using an appropriate hashing algorithm such as SHA-1 or MD5. The SOAP sender/receiver adapter has only a few properties, none specific to this. How to achieve it?
    A - You can provide this option while generating the certificate itself.
    Please let me know if this helps.
    Cheers,
    Sarath.

  • SSRS 2012 Problem understanding View State Validation steps

    Hi,
    ***** Note: I have put my questions in bold to make them easier to find *****
    I am trying to Implement this solution on our systems and need help on how to set it up ?
    Pasted from 
    http://technet.microsoft.com/en-us/library/cc281307.aspx?lc=1033
    How to Configure View State Validation
    To run a scale-out deployment on an NLB cluster, you must configure view state validation so that users can view interactive HTML reports. You must do this for the report server and for Report Manager.
    View state validation is controlled by ASP.NET. By default, view state validation is enabled and uses the identity of the Web service to perform the validation. However, in an NLB cluster scenario, there are multiple service instances and web service identities that run on different computers. Because the service identity varies for each node, you cannot rely on a single process identity to perform the validation.
    To work around this issue, you can generate an arbitrary validation key to support view state validation, and then manually configure each report server node to use the same key. You can use any randomly generated hexadecimal sequence. The validation algorithm
    (such as SHA1) determines how long the hexadecimal sequence must be.
    1. Generate a validation key and decryption key by using the autogenerate functionality provided by the .NET Framework. (Well, how do I generate a validation key using the .NET Framework?)
    In the end, you must have a single <machineKey> entry that you can paste into the Web.config file for each Report Manager instance in the scale-out deployment.
    The following example provides an illustration of the value you must obtain. Do not copy the example into your configuration files; the key values are not valid.
    <machineKey validationKey="123455555" decryptionKey="678999999" validation="SHA1" decryption="AES"/>
    2. Open the Web.config file for Report Manager, and in the <system.web> section paste the <machineKey> element that you generated. By default, the Report Manager Web.config file is located in \Program Files\Microsoft SQL Server\MSRS10_50.MSSQLSERVER\Reporting Services\ReportManager\Web.config.
    3. Save the file.
    4. Repeat the previous step for each report server in the scale-out deployment.
    5. Verify that all Web.config files in the \Reporting Services\Report Manager folders contain identical <machineKey> elements in the <system.web> section.
    Does the key generated using the above steps produce the same <machineKey> element for every node?
    Any help on this would be appreciated .
    Thank you !
    Thanks

    Hi SQL_Help:
    Per my understanding, you have some questions about the steps described in "How to Configure View State Validation": you don't know how to generate the validation key, and you are also not clear on whether all the Web.config files should contain the same <machineKey> element, right?
    There are several methods to generate the validation key; details below for your reference:
    Generate it with the machineKey generator utility at http://aspnetresources.com/tools/keycreator.aspx, with your own utility, or from this link: http://www.eggheadcafe.com/articles/GenerateMachineKey/GenerateMachineKey.aspx
    You can also write some code to generate the keys; detailed steps and sample code are in the article below:
    How to create keys by using Visual C# .NET for use in Forms authentication
    You should add the same <machineKey> element, containing the generated keys, to the Web.config file on each server node.
    For more detailed information you can refer to the article below:
    https://msdn.microsoft.com/en-us/library/ff649308.aspx
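If you just need a random hexadecimal sequence of the right length, the idea behind those utilities is simple. Here is a minimal sketch (in Java, purely for illustration - the KB article above shows a C# version) that generates keys of the commonly recommended lengths: 64 random bytes (128 hex characters) for SHA1 validation and 32 random bytes (64 hex characters) for AES decryption:

```java
import java.security.SecureRandom;

public class MachineKeyGen {
    // Returns numBytes of cryptographically random data as an uppercase hex string.
    static String randomHex(int numBytes) {
        byte[] buf = new byte[numBytes];
        new SecureRandom().nextBytes(buf);
        StringBuilder sb = new StringBuilder();
        for (byte b : buf) {
            sb.append(String.format("%02X", b & 0xFF));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String validationKey = randomHex(64); // 128 hex chars for SHA1 validation
        String decryptionKey = randomHex(32); // 64 hex chars for AES decryption
        System.out.println("<machineKey validationKey=\"" + validationKey
                + "\" decryptionKey=\"" + decryptionKey
                + "\" validation=\"SHA1\" decryption=\"AES\"/>");
    }
}
```

Whatever tool you use, generate the element once and paste the identical element into every node's Web.config, which answers the question above: the key is not regenerated per node.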
    If you still have any problem, please feel free to ask.
    Regards,
    Vicky Liu
    TechNet Community Support

  • RSA decryption Error: Data must start with zero

    For certain reasons, I tried to use RSA as a block cipher to encrypt/decrypt a large file. When I debug my program, the following errors are shown:
    javax.crypto.BadPaddingException: Data must start with zero
         at sun.security.rsa.RSAPadding.unpadV15(Unknown Source)
         at sun.security.rsa.RSAPadding.unpad(Unknown Source)
         at com.sun.crypto.provider.RSACipher.doFinal(RSACipher.java:356)
         at com.sun.crypto.provider.RSACipher.engineDoFinal(RSACipher.java:394)
         at javax.crypto.Cipher.doFinal(Cipher.java:2299)
         at RSA.RRSSA.main(RRSSA.java:114)
    From the breakpoint, I think the problem is in the decrypt operation: Cipher.doFinal() does not complete correctly.
    I searched for this problem on Google; many people have met the same problem, but most of them didn't get an answer.
    The source code is :
    Key generation:
    package RSA;

    import java.io.FileOutputStream;
    import java.io.ObjectOutputStream;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.PrivateKey;
    import java.security.PublicKey;

    public class GenKey {
        /**
         * @param args
         * @author tang
         */
        public static void main(String[] args) {
            try {
                KeyPairGenerator KPG = KeyPairGenerator.getInstance("RSA");
                KPG.initialize(1024);
                KeyPair KP = KPG.genKeyPair();
                PublicKey pbKey = KP.getPublic();
                PrivateKey prKey = KP.getPrivate();
                // save public key
                FileOutputStream out = new FileOutputStream("RSAPublic.dat");
                ObjectOutputStream fileOut = new ObjectOutputStream(out);
                fileOut.writeObject(pbKey);
                // save private key
                FileOutputStream outPrivate = new FileOutputStream("RSAPrivate.dat");
                ObjectOutputStream privateOut = new ObjectOutputStream(outPrivate);
                privateOut.writeObject(prKey);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    Encrypt / Decrypt:
    package RSA;

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.ObjectInputStream;
    import java.security.Key;

    import javax.crypto.Cipher;

    public class RRSSA {
        /**
         * @param args
         */
        public static void main(String[] argv) {
            // File used to encrypt/decrypt
            String dataFileName = argv[0];
            // encrypt/decrypt: operation mode
            String opMode = argv[1];
            // Key file
            String keyFileName = null;
            if (opMode.equalsIgnoreCase("encrypt")) {
                keyFileName = "RSAPublic.dat";
            } else {
                keyFileName = "RSAPrivate.dat";
            }
            try {
                FileInputStream keyFIS = new FileInputStream(keyFileName);
                ObjectInputStream OIS = new ObjectInputStream(keyFIS);
                Key key = (Key) OIS.readObject();
                Cipher cp = Cipher.getInstance("RSA/ECB/PKCS1Padding");
                if (opMode.equalsIgnoreCase("encrypt")) {
                    cp.init(Cipher.ENCRYPT_MODE, key);
                } else if (opMode.equalsIgnoreCase("decrypt")) {
                    cp.init(Cipher.DECRYPT_MODE, key);
                } else {
                    return;
                }
                FileInputStream dataFIS = new FileInputStream(dataFileName);
                int size = dataFIS.available();
                byte[] encryptByte = new byte[size];
                dataFIS.read(encryptByte);
                if (opMode.equalsIgnoreCase("encrypt")) {
                    FileOutputStream FOS = new FileOutputStream("cipher.txt");
                    // RSA block size
                    //int blockSize = cp.getBlockSize();
                    int blockSize = 64;
                    int outputBlockSize = cp.getOutputSize(encryptByte.length);
                    int leavedSize = encryptByte.length % blockSize;
                    int blocksNum = leavedSize == 0 ? encryptByte.length / blockSize
                            : encryptByte.length / blockSize + 1;
                    byte[] cipherData = new byte[outputBlockSize * blocksNum];
                    // encrypt each block
                    for (int i = 0; i < blocksNum; i++) {
                        if ((encryptByte.length - i * blockSize) > blockSize) {
                            cp.doFinal(encryptByte, i * blockSize, blockSize,
                                    cipherData, i * outputBlockSize);
                        } else {
                            cp.doFinal(encryptByte, i * blockSize,
                                    encryptByte.length - i * blockSize,
                                    cipherData, i * outputBlockSize);
                        }
                    }
                    FOS.write(cipherData);
                    FOS.close();
                } else {
                    FileOutputStream FOS = new FileOutputStream("plaintext.txt");
                    //int blockSize = cp.getBlockSize();
                    int blockSize = 64;
                    int outputBlockSize = cp.getOutputSize(encryptByte.length);
                    int leavedSize = encryptByte.length % blockSize;
                    int blocksNum = leavedSize == 0 ? encryptByte.length / blockSize
                            : encryptByte.length / blockSize + 1;
                    byte[] plaintextData = new byte[outputBlockSize * blocksNum];
                    for (int j = 0; j < blocksNum; j++) {
                        if ((encryptByte.length - j * blockSize) > blockSize) {
                            cp.doFinal(encryptByte, j * blockSize, blockSize,
                                    plaintextData, j * outputBlockSize);
                        } else {
                            cp.doFinal(encryptByte, j * blockSize,
                                    encryptByte.length - j * blockSize,
                                    plaintextData, j * outputBlockSize);
                        }
                    }
                    FOS.write(plaintextData);
                    FOS.close();
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    Edited by: sabre150 on Aug 3, 2012 6:43 AM
    Moderator action: added [code] tags so as to make the code readable. Please do this yourself in the future.
    Edited by: 949003 on 2012-8-3 5:31 AM

    1) Why are you not closing the streams when writing the keys to the file?
    2) Each block of RSA encrypted data has size equal to the key modulus (in bytes). This means that for a key size of 1024 bits you need to read 128 bytes and not 64 bytes at a time when decrypting ( this is probably the cause of your 'Data must start with zero exception'). Since the input block size depends on the key modulus you cannot hard code this. Note - PKCS1 padding has at least 11 bytes of padding so on encrypting one can process a maximum of the key modulus in bytes less 11. Currently you have hard coded the encryption block at 64 bytes which is OK for your 1024 bits keys but will fail for keys of modulus less than about 936 bits.
    3) int size = dataFIS.available(); is not a reliable way to get the size of an input stream. If you check the Javadoc for InputStream.available() you will see that it returns the number of bytes that can be read without blocking and not the stream size.
    4) InputStream.read(byte[]) does not guarantee to read all the bytes and returns the number of bytes actually read. This means that your code to read the content of the file into an array may fail. Again check the Javadoc. To be safe you should used DataInputStream.readFully() to read a block of bytes.
    5) Reading the whole of the cleartext or ciphertext file into memory does not scale and with very large files you will run out of memory. There is no need to do this since you can use a "read a block, write the transformed block" approach.
    RSA is a very very very slow algorithm and it is not normal to encrypt the whole of a file using it. The standard approach is to perform the encryption of the file content using a symmetric algorithm such as AES using a random session key and use RSA to encrypt the session key. One then writes to the ciphertext file the RSA encrypted session key followed by the symmetric encrypted data. To make it more secure one should actually follow the extended procedure outlined in section 13.6 of Practical Cryptography by Ferguson and Schneier.
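That hybrid approach can be sketched as follows. This is my own minimal illustration of the idea, not the poster's code: the file contents go through AES under a random session key, and only the 16-byte session key goes through RSA (ECB mode and a round-trip check are used here for brevity; a real implementation should use CBC or GCM with an IV and write the modulus-sized encrypted key followed by the encrypted data to the output file):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;

public class HybridDemo {
    // Encrypts data with a fresh AES session key, wraps the key with RSA,
    // then reverses both steps; returns true if the plaintext survives.
    static boolean roundTrip(byte[] plaintext) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(1024);
        KeyPair kp = kpg.genKeyPair();

        // 1. random AES session key
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey session = kg.generateKey();

        // 2. bulk data goes through the fast symmetric cipher
        Cipher aes = Cipher.getInstance("AES/ECB/PKCS5Padding");
        aes.init(Cipher.ENCRYPT_MODE, session);
        byte[] ctData = aes.doFinal(plaintext);

        // 3. only the 16-byte session key goes through slow RSA:
        //    one block, no manual chunking, output is always modulus-sized
        Cipher rsa = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        rsa.init(Cipher.ENCRYPT_MODE, kp.getPublic());
        byte[] ctKey = rsa.doFinal(session.getEncoded());
        // a real file would store ctKey (128 bytes here) followed by ctData

        // decrypt: recover the session key with the private key, then the data
        rsa.init(Cipher.DECRYPT_MODE, kp.getPrivate());
        SecretKey recovered = new SecretKeySpec(rsa.doFinal(ctKey), "AES");
        aes.init(Cipher.DECRYPT_MODE, recovered);
        return Arrays.equals(aes.doFinal(ctData), plaintext);
    }

    public static void main(String[] args) throws Exception {
        byte[] data = "a large file's contents would go here".getBytes("UTF-8");
        System.out.println(roundTrip(data)); // prints true
    }
}
```

Note how this sidesteps the original problem entirely: since RSA only ever sees a single key-sized block, there is no block-size arithmetic to get wrong.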
