Best practices to protect an encryption key

I'm writing software that uses AES-256; what are some good ways to protect the AES key?
This is unattended software running locked away in a datacenter.

There are many different approaches to this problem; a lot depends on your business, operational, technical and security requirements (what I refer to as BOTS). Given that you have only specified one operational requirement - unattended operations in a data-center - I will assume the rest based on past project experience, regulatory requirements, etc. The approach that we have taken is as follows:
1) After an AES key is generated, we generate a message-digest of it (to store for later verifications);
2) The AES key is encrypted with a 2048-bit RSA Public Key that is specific to an "encryption domain" (a logical grouping of keys, policies, users and authorizations);
3) The PrivateKey of the encryption-domain is encrypted with another 2048-bit RSA PublicKey (called Migration and Storage Key or MASK) for the purpose of migrating RSA keys from one system to another; each MASK is unique to a system;
4) The PrivateKey of the MASK is finally encrypted with a third 2048-bit RSA PublicKey whose PrivateKey is generated and stored inside a cryptographic hardware module - the Trusted Platform Module (TPM) or a Hardware Security Module (HSM);
5) The TPM/HSM requires activation by three (3) Key Custodians (KC) before the hardware module will release the PrivateKey to decrypt the MASK's PrivateKey, which decrypts the encryption-domain's PrivateKey, which decrypts the AES key, which finally decrypts the ciphertext;
6) The PINs of the three Key Custodians are never stored on the system; they are provided by the individuals using a tool - which can run locally or remotely - over SSL;
7) The PIN is accepted by the system only if it accompanies a digitally-signed random nonce (number-used-once) sent by the system before the PIN is sent by the KC; if the signature fails or it takes longer than the time-out period, the PIN is not accepted;
8) Only after all three PINs are accepted and verified by the system, does the hardware module get activated and the PrivateKey is released to decrypt the chain of keys;
9) A reboot of the system erases all such authentications/authorizations from the system and requires the KCs to activate the hardware module again; however, the KCs can set their PINs on the system from home/hotel/on-the-road as long as they have VPN access to the system and their KC-token (containing their unique RSA keys/certificate for digitally signing the nonce).
While this might seem elaborate, it is necessary to meet PCI-DSS "dual-control, split-knowledge" requirements. The ability to let KCs set their PINs remotely is necessary because the data-center is unattended; the hardware module is necessary so that the chain is anchored by a key-pair that cannot be copied or extracted off the machine; all other keys are stored on disk only as ciphertext.
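To make steps 1 and 2 concrete, here is a minimal Java (JCE) sketch of generating the AES key, keeping a digest of it, and wrapping it with a domain RSA public key. The names are illustrative only, and in the real scheme the domain key pair would be loaded from the HSM-protected chain described above rather than generated in place:

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeyWrapSketch {
    public static void main(String[] args) throws Exception {
        // Step 1: generate the AES-256 data key and a SHA-256 digest of it
        // (the digest is stored so the key can be verified after later unwrapping).
        KeyGenerator aesGen = KeyGenerator.getInstance("AES");
        aesGen.init(256);
        SecretKey aesKey = aesGen.generateKey();
        byte[] keyDigest = MessageDigest.getInstance("SHA-256").digest(aesKey.getEncoded());

        // Step 2: wrap the AES key with the encryption-domain's 2048-bit RSA public key.
        // A throwaway key pair is generated here only to keep the sketch self-contained.
        KeyPairGenerator rsaGen = KeyPairGenerator.getInstance("RSA");
        rsaGen.initialize(2048);
        KeyPair domainKeys = rsaGen.generateKeyPair();

        Cipher wrapper = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        wrapper.init(Cipher.WRAP_MODE, domainKeys.getPublic());
        byte[] wrappedAesKey = wrapper.wrap(aesKey);   // this ciphertext is what gets stored on disk

        // Unwrapping (only possible once the chain up to the TPM/HSM has been unlocked by the KCs):
        Cipher unwrapper = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        unwrapper.init(Cipher.UNWRAP_MODE, domainKeys.getPrivate());
        SecretKey recovered = (SecretKey) unwrapper.unwrap(wrappedAesKey, "AES", Cipher.SECRET_KEY);

        // Verify the recovered key against the stored digest before using it.
        byte[] check = MessageDigest.getInstance("SHA-256").digest(recovered.getEncoded());
        System.out.println("Key digest matches: " + MessageDigest.isEqual(keyDigest, check));
    }
}

The same wrap/unwrap pattern repeats up the chain (the encryption-domain PrivateKey under the MASK, the MASK PrivateKey under the hardware-resident key); only the outermost private key never leaves the TPM/HSM.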
All crypto-systems - SSH, SSL, IPSec, etc. - use a variation of this scheme; we developed ours based on the BOTS requirements we have heard over the years. If you think that such a complex scheme has got to be awfully expensive, you may be in for a shock.
Hope that helps.

Similar Messages

  • What is the best practice to protect the ColdFusion Administrator login page

    Hi all,
    Can someone suggest the best practice for protecting the administrator login? At the moment there is only the normal administrator page password protecting it, which does not seem very secure, especially when the application is on the internet.
    Regards,
    Bubblegum.

    You can protect the page with file-system-level privileges. Set up a new virtual server that maps to a separate copy of /cfide (and remove /admin and /adminapi from the other cfide folder your internet sites use). Limit which IP addresses can hit /cfide.
    We run multiple instances, so we connect directly to each instance to manage it, and those ports aren't accessible on the internet. To top it off, we have an ISAPI ReWrite rule that sends a 404 if you try /cfide/administrator or adminapi.
    If you're using CF8, you can set it up so it requires a specific username instead of a generic name.

  • Exchange 2010 - What is best practice for protection against replication of corruption?

    My Exchange 2010 SP3 environment includes a DAG with an offsite passive copy. The DB is backed up nightly with TSM TDP. My predecessor also installed DoubleTake software to protect the DB against replication of malware or corruption to the passive mailbox
    server; DoubleTake updates the offsite DB replica every 4 hours. Understanding that this is ultimately a decision based on my company's risk tolerance, to that end, what is the probability of malware or corruption propagating due to replication?
    What is industry best practice: do most companies keep a third, lagged copy of the DB in the DAG, or are 3rd-party solutions such as DoubleTake commonly employed? Are there other, better (and less expensive) options?

    Correct. If an 8-day lagged copy is maintained, then the daily transaction log files for those 8 days are preserved before being replayed into the lagged database. This ensures point-in-time recovery, as you can select which log files you need to replay into the database.
    Logs get truncated once they have been successfully replayed into the database and have passed their lag time-stamp.
    Each database copy has a checkpoint file (.chk), which keeps track of transaction log file status.
    Command to check the transaction log replay status:
    eseutil /mk <path-of-the-chk-file>  (the .chk file is stored with the transaction log files)
    - Sarvesh Goel - Enterprise Messaging Administrator

  • Best Practice to use one Key on ACE for new CSR?

    We generate multiple CSRs on our ACE, but our previous network admin was using only
    one key for all new CSR requests.
    For example, we have samplekey.pem on our ACE,
    and we use samplekey.pem to generate CSRs for multiple certs.
    Is this best practice, or should we be using a new key for each new CSR?
    Also, is it OK to delete old CSRs on the LB, since the limit is only 8? Thanks.


  • What is the best practice for creating a primary key on a fact table?

    What is the best practice for the primary key on a fact table?
    1. Use a composite key
    2. Create a surrogate key
    3. Use no primary key
    In the documentation, I can only find "From a modeling standpoint, the primary key of the fact table is usually a composite key that is made up of all of its foreign keys."
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/logical.htm#i1006423
    I also found a relevant thread that states a primary key on the fact table is necessary:
    Primary Key on Fact Table.
    But if no business rule requires uniqueness of the records and there is no materialized view, do we still need a primary key? Is there any other bad effect of having no primary key on the fact table, and are there any benefits from not creating one?

    Well, the natural combination of the dimensions connected to the fact would be a natural primary key, and it would be composite.
    Having an artificial PK might simplify things a bit.
    Having no PK leads to a major mess. A fact should represent a business transaction, or some general event. If you're loading data you want to be able to identify the records that are processed. Also, without a PK, if you forget to create a unique key, access to this fact table will be slow. Plus, having no PK means that if you want to use different tools, like the Data Modeller in Jbuilder or OWB insert/update functionality, they won't work, since there's no PK. Defining a PK for every table is good practice. Not defining a PK is asking for a load of problems, from performance to functionality and data quality.
    Edited by: Cortanamo on 16.12.2010 07:12

  • Best practice question: static constant keys

    Hi- quick one here... just curious, I need a constant defined that I can use as a key into various NSDictionary instances and for other purposes also.
    I tried defining this in my header file:
    static NSString const *KEY_ID = @"ID";
    But passing KEY_ID as the key in dictionary's setValue:forKey: function causes a warning about pointer types.
    Basically, I need the equivalent of ...public static final String KEY_ID = "ID".... in Java. Any thoughts?

    If you look at your headers, Apple's approach is generally:
    extern NSString * const KEY_ID;
    With a follow-up definition/assignment in the relevant .m file, for example: NSString * const KEY_ID = @"ID";

  • Best practices for protecting files from ransomware?

    If you don't know what CryptoWall and such ransomware is, you are lucky. For now.
    This is probably more of a desktop security issue, but I'd like some ideas for file server protection.
    A corporate office got lucky today: only the files on one PC were infected, along with the network file shares the user had access to - but those were backed up, hence the "lucky".
    Still, it was scary enough that they want to know what Microsoft wants us to do to prevent this in the future. The user was not an admin on the local machine, so we are not sure how it was installed (I've read people get it different ways).
    We have SCCM Endpoint Protection and obviously it didn't help. It did actually stop a password-stealing utility from installing around the same time, but it didn't stop us from having thousands of files rendered useless for many hours today.
    It was suggested that we stop using mapped network drives, but I think one share was hit without a mapping (still waiting for confirmation). And I think anywhere it finds a path, e.g. under Favorites, could be attacked.
    Suggestions please.
    Thank you!

    You can try this.
    http://www.thirdtier.net/2013/10/cryptolocker-prevention-kit/

  • Best practices with sequences and primary keys

    We have a table of system logs that has a column called created_date. We also have a UI that displays these logs ordered by created_date. Sometimes, two rows have the exact same created_date down to the millisecond and are displayed in the UI in the wrong order. The suggestion was to order by primary key instead, since the application uses an Oracle sequence to insert records, so the order of the primary key will be chronological. I felt this may be a bad idea as a best practice, since the primary key should not be used to guarantee chronological order; but in this particular application's case it is not a multi-threaded environment, so it will work, and we are proceeding with it.
    The value for created_date is NOT set at the database level (as sysdate) but rather by the application when it creates the object, which is persisted by Hibernate. In a multi-threaded environment, thread A could create the object and then get blocked by thread B, which is able to create its object and persist it with key N; when control returns to thread A, it persists its object with key N+1. In this scenario thread A has an earlier timestamp but a larger key, so it will be ordered before thread B, which is in error.
    I like to think of primary keys as solely something to be used for referential purposes at the database level, rather than carrying application-level meaning (like "the larger the key, the more recent the record", etc.). What do you guys think? Am I being too rigorous in my views here? Or perhaps I am even mistaken in how I interpret this?

    >
    I think the chronological order of records should be using a timestamp (i.e. "order by created_date desc" etc.)
    >
    Not that old MYTH again! That has been busted so many times it's hard to believe anyone still wants to try to do that.
    Times are in chronological order: t1 is earlier (SYSDATE-wise) than t2 which is earlier than t3, etc.
    1. at time t1 session 1 does an insert of ONE record and provides SYSDATE in the INSERT statement (or using a trigger).
    2. at time t3 session 2 does an insert of ONE record and provides SYSDATE
    (which now has a value LATER than the value used by session 1) in the INSERT statement.
    3. at time t5 session 2 COMMITs.
    4. at time t7 session 1 COMMITs.
    Tell us: which row was added FIRST?
    If you extract data at time t4 you won't see ANY of those rows above since none were committed.
    If you extract data at time t6 you will only see session 2 rows that were committed at time t5.
    For example if you extract data at 2:01pm for the period 1pm thru 1:59pm and session 1 does an INSERT at 1:55pm but does not COMMIT until 2:05pm your extract will NOT include that data.
    Even worse - your next extract will pull data for 2pm thru 2:59pm and that extract will NOT include that data either since the SYSDATE value in the rows is 1:55pm.
    The crux of the problem is that the SYSDATE value stored in the row is determined BEFORE the row is committed but the only values that can be queried are the ones that exist AFTER the row is committed.
    About the best you, the user (i.e. not ORACLE the superuser), can do is to
    1. create the table with ROWDEPENDENCIES
    2. force delayed-block cleanout prior to selecting data
    3. use ORA_ROWSCN to determine the order that rows were inserted or modified
    As luck would have it there is a thread discussing just that in the Database - General forum here:
    ORA_ROWSCN keeps increasing without any DML

  • Best practice for PK and indexes?

    Dear All,
    What is the best practice for creating primary keys and indexes? Should we keep them in the same tablespace as the table, or should we create a separate tablespace for all indexes and primary keys? Please note I am talking about a table that has 21 million rows at the moment and is growing by 10k to 20k rows daily. This table is also heavily involved in daily reports and is causing slow performance. Currently the complete table with all associated objects, such as indexes and the PK, is stored in one separate tablespace. If my way is right, then please advise how I can improve the performance of retrieval or DML operations on this table?
    Thanks in advance..
    Zia Shareef

    Well, thanks for the valuable advice... I am using Oracle 8i, and let me tell you the exact problem...
    My billing database has two major tables with almost 21 million rows each... one holds collection data and the other invoices... many reports show the data by joining the Customer + Collection + Invoices tables.
    There are 5 common fields between the invoices (reading) and collection tables:
    YEAR, MONTH, AREA_CODE, CONS_CODE, BILL_TYPE (adtl)
    One of my batch processes has the following update and it is VERY, VERY slow:
    UPDATE reading r
    SET bamount = (SELECT SUM(camount)
    FROM collection cl
    WHERE r.ryear = cl.byear
    AND r.rmonth = cl.bmonth
    AND r.area_code = cl.area_code
    AND r.cons_code = cl.cons_code
    AND r.adtl = cl.adtl)
    WHERE area_code = 1
    WHERE area_code = 1
    Tentatively, area_code 1 has 20,000 consumers;
    each consumer may have 72 invoices, and against these invoices there may be 200 rows in the collection table (the system has provision to record partial payments against one invoice).
    NOTE: Presently my process is based on cursors, so the above query runs for one consumer at a time, but just to give an idea I have written it for the whole area.
    Mr. Yingkuan, can you please tell me how I can check whether the table's statistics are current and how I can make them current? Does it really affect performance?

  • What is best practices

    hi gurus,
    I would like to know what Best Practices are, where we would use these Best Practices, what benefits we get through Best Practices, and for which industries they are useful.
    If anyone can help me on this subject, it will be of value to me.
    Thanks in advance

    Dear nag
    SAP Best Practices facilitate a speedy and cost-efficient implementation of SAP Software with a minimal need for planning and resources. SAP Best Practices are suited to the enterprise requirements of different industries. They integrate well with varying financial accounting and human resource management systems and can be used by enterprises of any size.
    => SAP Best Practices are a central component of the second phase of ValueSAP (Implementation). The ValueSAP framework guarantees value throughout the entire life cycle of SAP Software.
    => SAP Best Practices are a cornerstone of mySAP.com, since all the key elements of mySAP.com are linked to SAP Best Practices through preconfiguration. Key elements include:
              <> Preconfigured collaborative business scenarios
              <> Preconfigured mySAP.com Workplaces
              <> Preconfigured access to electronic marketplaces
              <> Preconfigured employee self-services
    Features
    SAP Best Practices consist of:
    An industry-specific version of AcceleratedSAP (ASAP) including various tools such as the Implementation Assistant, the Question & Answer database (Q&Adb), detailed documentation of business processes and accelerators:
    The industry-specific version of ASAP provides extensive business knowledge in a clear, structured format, which is geared towards the needs of your enterprise. You can use the reference structures with industry-specific end-to-end business processes, checklists and questions to create a Business Blueprint. This document contains a detailed description of your enterprise requirements and forms the basis for a rapid and efficient implementation of an SAP System.
    Preconfigured systems providing you with the industry-specific and/or country-specific Customizing settings you need to effectively implement business processes relevant to your enterprise
    Key elements include:
    -> Tried and tested configuration settings for the critical processes in your industry including special functions that are standard across all the preconfigured systems
    -> Documentation on configuration, which provides project team members with a comprehensive overview of the system settings
    -> Master data, which you can easily change or extend, for example, organizational structures, job roles, and customer/vendor master records
    -> Test catalogs that can be used to replay test processes in training courses, for example, to help you gain a better understanding of how processes work in the system
    Thanks
    G. Lakshmipathi

  • Any best practice for Key Management with Oracle Obfuscation?

    Hi,
    I was wondering if anyone is aware of any best practices regarding key management when using Oracle's DBMS_OBFUSCATION_TOOLKIT. I'm particularly interested in how we can protect the encryption/decryption key that we would use.
    Thanks,
    Jim

    Oracle offers this document, which includes a strategy for what you're after:
    http://download-west.oracle.com/docs/cd/B13789_01/network.101/b10773/apdvncrp.htm#1006234
    -Chuck

  • Best Practice for IKE keys

    Folks,
    I am configuring my first site-to-site VPN using IPsec and IKE; I wanted to know if there is anything I should watch out for, and what the best practices are for IKE.
    I have generated a phrase that is 30 characters long, but should I include “special characters” in my IKE key?

    Rather than the key length and 'strength' I'd focus on keeping a copy documented / stored securely offline somewhere. Process and documentation are at least as important as the technology.
    99% of your protection comes from using a VPN at all as opposed to the characters used in your PSK.
    If it's an option (e.g. ASA 8.4 at both ends) I'd recommend using IKEv2.
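    If you do still want maximum entropy rather than agonizing over special characters, one option (a rough Java sketch, not vendor guidance; the 32-byte length is just an assumption, not an IKE requirement) is to generate the PSK from a CSPRNG and base64-encode it, then document and store it offline as advised above; check your platform's allowed character set before pasting it in:

    import java.security.SecureRandom;
    import java.util.Base64;

    public class PskGenerator {
        public static void main(String[] args) {
            // 32 random bytes gives 256 bits of entropy, far more than a hand-picked 30-character phrase.
            byte[] raw = new byte[32];
            new SecureRandom().nextBytes(raw);

            // Base64 keeps the result printable and free of whitespace or quoting issues.
            String psk = Base64.getEncoder().withoutPadding().encodeToString(raw);
            System.out.println(psk);
        }
    }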

  • Best practice for encrypting data in CRM 2013 (other than the fields it already encrypts)

    I know CRM 2013 can encrypt some values by default, but if I want to store data in custom fields and then encrypt that, what's the best practice? I'm working on a project to do this through a JavaScript action that, when triggered from a form, would reference
    a web service to decrypt values, plus a plugin to encrypt on Update/Create, but I hoped there might be a simpler or more widely recommended way to do this.
    Thanks.

    At what level are you encrypting? CRM 2013 supports encrypted databases if you're worried about the data at rest.
    In transit, you should be using SSL to encrypt the entire exchange, not just individual values.
    You can use field-level security to hide certain fields from end users of a certain type if you're worried about that. It's even more secure than anything you could do with JS, as the data is never passed over the wire.
    Is there something those don't solve?
    The postings on this site are solely my own and do not represent or constitute Hitachi Solutions' positions, views, strategies or opinions.

  • What is the best practice for inserting (unique) rows into a table with a key-column constraint when the source may contain duplicate (already existing) rows?

    My final data table has a unique key constraint on two key columns. I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but they are not constrained
    (not unique) in the daily capture table). I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns). Currently, what I do is select * into a #temp table from the join
    of the daily capture and final data tables on these two key columns. Then I delete the rows in the daily capture table which match the #temp table. Then I insert the remaining rows from daily capture into the final data table.
    Would it be possible to simplify this process by using an INSTEAD OF trigger on the final table and just insert directly from the daily capture table? How would this look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic Netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need
    to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make
    up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the first day of your RDBMS class? A table has to have a key. You need to fix this error. What ETL tool do you use?
    >> I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables. 
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL

  • Best practice to run Microsoft Endpoint Protection client in VDI environment

    We are using a Citrix XenDesktop VDI environment. The Symantec Endpoint Protection client (VDI performance optimised) has been installed on the “streamed to the clients” virtual machine image. Basically, all the files (in the golden image) have been “tattooed” with
    a Symantec signature. Now, when a new VM starts, the Symantec scan engine simply ignores “tattooed” files and also randomises scan times. This is a rough explanation, but I hope you’ve got the idea.
    We are switching from Symantec to Microsoft Endpoint Protection and I’m looking for any information and documentation regarding best practices for running Microsoft Endpoint Protection clients in a VDI environment.
    Thanks in advance.

    I see this post is a bit old, but the organization I'm with has a very large VDI deployment using VMware, and we also use SCEP 2012 for AV.
    Did you find out what you were looking for, or did you elect to take a different direction?
    We install SCEP 2012 into the base image and manage the settings using GPO; definition updates come through the normal route.
    Our biggest challenge is getting alert messages from the clients.
    Thanks
