Best practices for protecting files from ransomware?

If you don't know what CryptoWall and such ransomware is, you are lucky. For now.
This is probably more of a desktop security issue, but I'd like some ideas for file server protection.
A corporate office got lucky today: only the files on one PC were infected, along with the network file shares that user had access to - but those were backed up, hence the "lucky".
It was scary enough, though, that they want to know what Microsoft wants us to do to prevent this in the future. The user was not an admin on the local machine, so we are not sure how it was installed (I've read people get it different ways).
We have SCCM Endpoint Protection and obviously it didn't help. It did actually stop a password-stealing utility from installing around the same time, but it didn't stop us from having thousands of files rendered useless for many hours today.
It was suggested that we stop using mapped network drives, but I think one share was hit without a mapping (still waiting for confirmation). And I think any share the malware can find - e.g., under Favorites - could be attacked.
Suggestions please.
Thank you!

You can try this.
http://www.thirdtier.net/2013/10/cryptolocker-prevention-kit/
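The kit above centers, as I understand it, on Group Policy software restriction policies that block executables launching from paths like %AppData%, which CryptoWall and CryptoLocker variants commonly use. On the file server side, early detection also limits damage. As a rough illustration only - the share path is a placeholder and the ransom-note filenames are examples reported with CryptoWall variants - a scheduled Python sweep could alert you when notes appear on a share:

    # ransom_note_sweep.py - sweep file shares for known ransom-note
    # filenames so an infection is caught early. The share path and
    # the filename list are assumptions; extend the list for the
    # families you care about.

    import os
    import sys

    SHARES = [r"\\fileserver\dept_share"]   # placeholder UNC path
    NOTE_NAMES = {
        "decrypt_instruction.txt",          # names seen with CryptoWall
        "decrypt_instruction.html",
        "help_decrypt.txt",
        "help_decrypt.html",
    }

    def sweep():
        hits = []
        for share in SHARES:
            for root, _dirs, files in os.walk(share):
                for name in files:
                    if name.lower() in NOTE_NAMES:
                        hits.append(os.path.join(root, name))
        return hits

    if __name__ == "__main__":
        found = sweep()
        for path in found:
            print("possible ransomware note: " + path)
        sys.exit(1 if found else 0)  # nonzero exit lets a scheduler raise an alert

Pair this with least-privilege on the shares themselves: a share the user can only read cannot be encrypted by malware running as that user.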

Similar Messages

  • Best practice for linking fields from multiple entity objects

    I am currently transitioning from PHP to ADF. I'm looking for the best practice for linking data from multiple entity objects.
    Example:
    EO 'REQUESTS' has fields: req_id, name, dt, his_stat_id, her_stat_id
    EO 'STATUSES' has fields: stat_id, short_txt_descr
    'REQUESTS' is linked to EO 'STATUSES' on: STATUSES.stat_id = REQUESTS.his_status_id
    'REQUESTS' is also linked to EO 'STATUSES' on: STATUSES.stat_id = REQUESTS.her_status_id
    REQUESTS.his_status_id is independent of REQUESTS.her_status_id
    When I create a VO for REQUESTS, I want to display: REQUESTS.name, REQUESTS.dt, STATUSES.short_txt_descr (for his_stat_id), STATUSES.short_txt_descr (for her_stat_id)
    What is the best practice for accomplishing this? It appears I could do it a few different ways:
    1. Create the REQUESTS VO with a LOV for his_stat_id and her_stat_id
    2. Create the REQUESTS VO with the join to STATUSES performed within the query for the VO. This would require joining on the STATUSES EO twice (his_stat_id, her_stat_id)
    3. I just started reading about View Links - would that somehow do what I'm looking for?
    I also need to be able to update his_status_id and her_status_id by selecting a STATUSES.short_txt_descr from a drop-down.
    Any suggestions on how to approach such a stupidly simple task?
    Using jDeveloper 11.1.2.2.0 if that makes a difference in the solution.
    Thanks ahead of time,
    CJ

    CJ,
    I vote for solution 1, as it matches your use case exactly. As you said, you want to update his_status_id and her_status_id by selecting a STATUSES.short_txt_descr from a drop-down, and this is exactly the LOV solution.
    View links are used for master-detail navigation (which you don't do here), and joining the data makes updates difficult (and you would still need an LOV for the drop-down box).
    Timo

  • Best Practice for Removing Zeroes from Database

    Does anyone have some clever bits of code or best practices for evaluating a database and instances of zeroes? I'm working on cleaning up our rules file and am thinking the best way to start would be to write some code to look for zeroes and write them to a log file. This would at least indicate if there was even a problem with zeroes (which there may or may not be).
    Any suggestions out there / utilities / code samples?
    Thanks.

    We accomplished this using data extracts from a subset of scenarios/years/entities/accounts, to ensure that all of our potential rules could be checked to verify they were not writing zeros. This worked pretty well for our purposes. A text editor called EmEditor allows VB macros fairly easily, and we could write a quick macro to check for strings ending in "; 0". You may also want to review the "calculated" check box in your extract and see if the zeros are a result of calculations. A rule output could work pretty well too, although it would take some defining, as you would have to write it out in a sub and make sure you capture the data of all subroutines if your zeros are rule-driven rather than actual inputs. You may also want to check whether very small, insignificant values are getting written; we have seen items with a value 13 places to the right of the decimal that were not really significant.
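    If a standalone script is easier than an editor macro, a short Python sketch along these lines could do the same scan (the "; 0" line-ending convention is taken from the description above, and the extract filename is whatever you pass in; adjust for your extract format):

        # zero_scan.py - flag extract lines that appear to write zeros,
        # assuming a zero cell shows up as a trailing "; 0" on the line.

        import sys

        def scan(path):
            hits = []
            with open(path, "r", encoding="utf-8", errors="replace") as f:
                for lineno, line in enumerate(f, start=1):
                    stripped = line.rstrip()
                    if stripped.endswith("; 0") or stripped.endswith(";0"):
                        hits.append((lineno, stripped))
            return hits

        if __name__ == "__main__":
            # usage: python zero_scan.py extract.txt
            for lineno, text in scan(sys.argv[1]):
                print("%d: %s" % (lineno, text))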
    JTF

  • Best Practices for Removing Shots from BDMV folder

    CS6 Production Premium Suite
    Win7x64
    Canon XA10
    I would appreciate feedback on the best practices for the following situation:
    Using Windows Explorer, I copy the BDMV folder from the XA10 to my Talk2 project folder.
    The BDMV folder contains three one-hour shots (talks), called Talk1, Talk2, and Talk3.
    Each shot consists of several MTS files in the STREAM folder, since MTS files have a maximum file size and a new MTS file is created whenever the current one reaches that limit.
    Since I only want to have Talk2 stuff in my Talk2 project folder, I need to remove the Talk1 and Talk3 stuff from the BDMV folder.
    I delete the Talk1 and Talk3 MTS files from the STREAM folder.
    I delete the Talk1 and Talk3 CPI files from the CLIPINF folder.
    I leave the PLAYLIST folder as is.
    Using the Media Browser, I import Talk2 (which consists of two MTS files).
    I edit the clip.
    This procedure seems to work, but I do not know if there are any "gotcha" issues.
    Thanks in advance.

    Oh, don't do it that way.  I know a lot of people do, heck, my boss does, but it's just asking for trouble.
    Treat your card as if it was your original tape master (because it is).  It is the most important thing you have.  Don't delete or move any part of it. 
    If you want to break up the talks, do it as you shoot them.  Use separate cards for each talk and archive each one separately.  There is too much valuable information in the structure of the card format.  You may not need it now but your editing program may need it later.
    Hard drive space is cheap, but digital recordings are priceless.

  • Best practices for logging results from Looped steps

    Hi all
    I would like to start a discussion to document best practices for logging results (to reports and databases) from looped steps.
    As an application example - let's say you are developing a test for one of NI's analog input or output cards and need to measure a voltage across multiple inputs or outputs.
    One way to do that would be to create a sequence that switches the appropriate signals and performs a "Voltage Measurement" test in a loop.    
    What are your techniques for keeping track of the individual measurements so that they can be traced to the individual signal paths that are being measured?
    I have used a variety of techniques such as
    i) Creating a custom step type that generates unique identifiers for each iteration of the loop. This required some customization to the results processing, and the sequence developer had to include code to ensure that a unique identifier was generated for each iteration.
    ii) Adding an input parameter to the test function/VI, passing the loop iteration to it, and adding this to the step's Additional Results parameters to log.

    I have attached a simple example (LV 2012 and TS 2012) that includes steps inside a loop structure as well as a looped test.
    If you enable both database and report generation, you will see the following:
    1) The numeric limit test in the for loop always generates the same name in the report and database, which makes it difficult to determine the result of a particular iteration.
    2) The Max voltage test report includes the parameter as an additional result, but the database does not include any differentiating information.
    3) The looped limit test generates both unique reports and unique database entries - you can easily see what the result of each iteration is.
    As mentioned, I am seeking to start a discussion for how others handle results for steps inside loops.    The only way I have been able to accomplish a result similar to that of the Looped step (unique results and database entry for each iteration of the loop) is to modify the process model results processing.  
    Attachments:
    test.vi ‏27 KB
    Sequence File 2.seq ‏9 KB

  • Best Practice for Flat File Data Uploaded by Users

    Hi,
    I have the following scenario:
    1. Users would like to upload data from a flat file and subsequently view their reports.
    2. The SAP BW support team would not be involved in the data upload process.
    3. Users would not go to RSA1 and use InfoPackages & DTPs. Hence, another mechanism for data upload is required.
    4. Users consist of two groups, external and internal. External users would not have access to the SAP system; however, access via a portal is acceptable.
    What best practices should we adopt for this scenario?
    Thanks!

    Hi,
    I can share what we do in our project.
    We get the files from the web to the application server, into a path dedicated to this process. The file placed on the server has a naming convention based on your project; you can name it as you like. Every day a file with the same name is placed on the server with different data. The path in the InfoPackage is fixed to that location on the server. The process chain then triggers and loads the data from that particular path on the application server. After the load completes, a copy of the file is taken as a backup and the file is deleted from that path.
    So this happens every day.
    Rgds
    SVU123
    Edited by: svu123 on Mar 25, 2011 5:46 AM
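    As an illustration of the backup-and-delete housekeeping described above, here is a minimal Python sketch (the paths and file name are placeholders; in practice this step often runs as an OS command or program step at the end of the process chain):

        # archive_and_clear.py - after a successful load, copy the inbound
        # flat file to a date-stamped backup and remove it from the fixed
        # InfoPackage path. All paths below are placeholders.

        import shutil
        from datetime import date
        from pathlib import Path

        INBOUND = Path("/interfaces/bw/inbound/upload.csv")
        BACKUP_DIR = Path("/interfaces/bw/backup")

        def archive_and_clear():
            if not INBOUND.exists():
                return
            BACKUP_DIR.mkdir(parents=True, exist_ok=True)
            # Date-stamp the backup so daily loads don't overwrite each other.
            stamp = date.today().strftime("%Y%m%d")
            target = BACKUP_DIR / ("%s_%s%s" % (INBOUND.stem, stamp, INBOUND.suffix))
            shutil.copy2(INBOUND, target)  # copy first...
            INBOUND.unlink()               # ...then delete the original

        if __name__ == "__main__":
            archive_and_clear()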

  • EP Upgrade - SP14 - Best Practice for Modification File Comparison

    SDN  Experts -
    We are upgrading our EP from SP14 - SP16.  SAP offers a file "diff" tool that is only useful for Java application files to assist in re-applying our mods on top of the new code stack.
    We are looking for best practices in Portal upgrades to do the following:
    - Identify all files that we have modified on existing SP
    - Diff all source code files (java, XML, GUI, other) between Current SP14 and SP16
    We are also looking for documentation that identifies the local directory structure for NWDS.  This would aid us in creating a batch process to "diff" our source code libraries.
    Any recommendations are appreciated.
    Thanks

    I'm not really getting your question, because you already state what to do:
    "We are looking for best practices in Portal upgrades to do the following:
    - Identify all files that we have modified on the existing SP
    - Diff all source code files (java, XML, GUI, other) between the current SP14 and SP16"
    You should know from your documentation what has changed, I guess. Then start diff-ing the code and recompile or repackage, as sketched below. NWDS also has diff functionality.
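    If you end up scripting the comparison outside NWDS, a small Python sketch like this can batch-diff two exported source trees (the paths are placeholders; note that dircmp's file comparison is shallow, i.e. stat-based, by default):

        # tree_diff.py - recursively compare two source trees and report
        # files that were added, removed, or changed between SPs.

        import filecmp

        def report(dcmp):
            for name in dcmp.left_only:
                print("only in old tree: %s/%s" % (dcmp.left, name))
            for name in dcmp.right_only:
                print("only in new tree: %s/%s" % (dcmp.right, name))
            for name in dcmp.diff_files:
                print("modified: %s/%s" % (dcmp.left, name))
            for sub in dcmp.subdirs.values():
                report(sub)  # recurse into common subdirectories

        if __name__ == "__main__":
            report(filecmp.dircmp("/upgrade/sp14_src", "/upgrade/sp16_src"))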
    Good luck,
    Benjamin

  • Best Practices for Exporting Files??

    I'm new to Premiere (coming from FCP).  I used Premiere months ago to compress some ProRes files to h.264 files for the web.  I sent the files through Media Encoder and everything seemed fine.  However, I realized after several weeks that the audio in all of the files was a few frames out of sync.  Having not been a Premiere user at the time I did not do much research and decided to just use MPEG Streamclip from then on.
    Now that I'm learning how to use Premiere, I looked up the issue on the forums and found that many people have had similar issues with the audio being out of sync after exporting. However, there are tons of different scenarios in which it seems to be occurring. The one common variable that I've noticed (among many of the threads, but not all) is that many of the people are exporting to a QuickTime format.
    While I don't remember all the details of my export and sequence settings from the issue months ago (so I don't want to address that specific case), I am curious what some "best practices" are when exporting from Premiere Pro. Is there any advantage/disadvantage to using AME rather than exporting directly from Premiere Pro? In general, I will just be exporting H.264 files for the web, MPEG-2 for DVD, and ProRes 422 for After Effects (or sometimes to bring into MPEG Streamclip).
    I shoot almost entirely in AVCHD, and usually at 1080p 30fps.  I'm running CS5 on a Macbook Pro 15" 2.0 Quad Core i7 8GB RAM.
    While the question may seem broad, my main concern that I want to avoid is having the audio out of sync.  But also I just want to know of any important details to keep in mind to prevent other issues.
    Thanks,
    Mike

    > I'm running CS5...
    What specific version? We're up to 5.0.4 now.
    There have been bug fixes for audio/video synch in the updates. One of the fixes was for a bug in the conforming of audio and indexing of MPEG files, so you need to delete your media cache files and let Premiere Pro create new ones for this fix to take effect.

  • Best practice for workflow triggering from Web Dynpro UI

    Hello, workflow community!
    I'm working on a task that allows triggering a workflow by clicking a button in a Web Dynpro UI. As always, there are multiple ways to do that - for instance, using the SAP Workflow API (SAP_WAPI_START_WORKFLOW), or raising an event upon the button click which is then caught by the workflow template.
    In my opinion, the optimal solution is to call an FM, which calls an ABAP class that raises an event, which, in turn, is caught by the workflow template. In this case, the FM serves as a kind of wrapper, where we can implement some additional checks if needed.
    But the question is: which approach is the best practice - raising an event, or using SAP_WAPI_*?
    Thanks.

    Let's combine the two: use SAP_WAPI_CREATE_EVENT.
    Usually I would not recommend starting a workflow directly (SAP_WAPI_START_WORKFLOW). When I look for workflows in a system, I usually start from SWE2 (the event linkage), which is based on events, so a workflow started via SAP_WAPI_START_WORKFLOW will not be seen there. SWE2 also gives you better control over starting the workflow: you can easily deactivate an event linkage, whereas finding where you called SAP_WAPI_START_WORKFLOW is more difficult, and deactivating it requires a code change.
    So use events, and raise them with SAP_WAPI_CREATE_EVENT.
    Also note that you have a check function module option in SWE2.

  • Exchange 2010 - What is best practice for protection against corruption replication?

    My Exchange 2010 SP3 environment includes a DAG with an offsite passive copy. The DB is backed up nightly with TSM TDP. My predecessor also installed DoubleTake software to protect the DB against replication of malware or corruption to the passive mailbox server. DoubleTake updates the offsite DB replica every 4 hours. Understanding that this is ultimately a decision based on my company's risk tolerance: what is the probability of malware or corruption propagating due to replication?
    What is industry best practice? Do most companies keep a third, lagged copy of the DB in the DAG, or are 3rd-party solutions such as DoubleTake commonly employed? Are there other, better (and less expensive) options?

    Correct. If an 8-day lagged copy is maintained, then 8 days of daily transaction log files are preserved before being replayed into the lagged database. This ensures point-in-time recovery, as you can select the log files that you need to replay into the database.
    Logs get truncated once they have been successfully replayed into the database and have passed their lag time-stamp.
    Each database copy has a checkpoint file (.chk), which keeps track of transaction log files status.
    Command to check the Transaction Logs replay status:
    eseutil /mk <path-of-the-chk-file>  - (stored with the Transaction log files)
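    If you have several database copies to check, a small Python wrapper can run that command across every checkpoint file (the log folder list is a placeholder; eseutil must be on the PATH of the Exchange server where this runs):

        # check_replay_status.py - run "eseutil /mk" against each database
        # copy's checkpoint (.chk) file to review log replay status.

        import glob
        import subprocess

        LOG_FOLDERS = [r"D:\ExchangeLogs\DB01", r"E:\ExchangeLogs\DB02"]

        for folder in LOG_FOLDERS:
            for chk in glob.glob(folder + r"\*.chk"):
                print("--- %s ---" % chk)
                # eseutil prints the checkpoint details to stdout
                subprocess.run(["eseutil", "/mk", chk], check=False)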
    - Sarvesh Goel - Enterprise Messaging Administrator

  • Best Practice for EJB calls from servlet?

    Hi folks
    I could not find general rules for making calls to a stateful EJB from the web container (e.g. from a backing bean). Some books say it is bad programming style to call one directly from a plain servlet; they say to first create an HttpSession object and call the stateful EJB from there.
    I'm a bit confused because I'm missing a best-practice guide on where to initiate such calls.
    Can somebody please point me in the right direction?
    Kind Regards
    Bruno
    Edited by: zajoho on Oct 30, 2008 11:14 PM

    Hi Bruno,
    The main issue with the combination of stateful session beans and servlets is the servlet threading model. It is dangerous to store a stateful session bean reference in servlet instance state, since the servlet instance can be accessed concurrently, yet a stateful session bean reference is intended to be used by only one client.
    As you point out, one alternative is to store the reference in the HttpSession. That associates the reference with a particular client, which matches the stateful session bean programming model.

  • What is the best practice to migrate files from one profile to another?

    I am increasingly having to move from a locally created profile to an Active Directory-joined mobile account. What is the best way to migrate the files from one profile to another?
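    For the mechanical copy itself, assuming both profiles are local home folders (the paths and folder list below are placeholders), a sketch like this preserves timestamps while copying the usual user folders; ownership and permissions on the destination still need to be reset for the new account:

        # profile_copy.py - copy common user folders from an old local
        # profile to a new one. Paths are placeholders; run with enough
        # rights and fix ownership/ACLs on the destination afterwards.

        import shutil
        from pathlib import Path

        OLD = Path("/Users/old_local_user")
        NEW = Path("/Users/ad_mobile_user")
        FOLDERS = ["Desktop", "Documents", "Downloads", "Pictures", "Movies", "Music"]

        for name in FOLDERS:
            src, dst = OLD / name, NEW / name
            if src.is_dir():
                # copytree uses copy2, which preserves modification times
                shutil.copytree(src, dst, dirs_exist_ok=True)
                print("copied %s -> %s" % (src, dst))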

    Some answers below:
    1. Yes, you should use snapshots (right click on t he object and select create snapshot or on the project menu, select Change Manager for a centralized snapshot management).
    2. See above.
    3. Usually you will export the (test/development) design repository to an mdl file, and import it into the other (production) design directory. Then you will deploy the code from the other design directory into the production target (runtime).
    2.1 From 9.2 on, you will have to deploy to file and then use OWB scripting (OMBPlus) to deploy the generated file outside of the deployment manager.
    2.2 Not sure what is asked in this question - can you rephrase?
    2.3 No, but the creation and use of these will not be managed and audited by OWB (Runtime Audit Browser).
    3.(bis) The target owner.
    Regards:
    Igor

  • Best Practices for configuring ICMP from the outside

    Question,
    Are there any best practices or recommendations on how ICMP should be configured from the outside? I have been cleaning up the rules on our ASA, as a lot were simply ported over years ago when we retired our PIX. I noticed that there is a rule to allow ICMP any any, and began to wonder how this works when the rules above it specify IP addresses and ports. This in turn started me looking for documentation or anything else to help me determine a best practice. Anyone know of anything?
    As a second part, how does this flow on a firewall if all the addresses are NATted? Is the ICMP traffic simply passed through the NAT, with the destination simply responding?
    Brent                   

    Here you go, bro!
    http://checkthenetwork.com/networksecurity%20Cisco%20ASA%20Firewall%20Best%20Practices%20for%20Firewall%20Deployment%201.asp#_Toc218778855
    access-list inside permit icmp any any echo
    access-list inside permit icmp any any echo-reply
    access-list inside permit icmp any any unreachable
    access-list inside permit icmp any any time-exceeded
    access-list inside permit icmp any any packet-too-big
    access-list inside permit udp any any range 33434 33464
    access-list inside deny icmp any any log
    P/S: if you think this comment is useful, please rate it nicely :-)

  • Best Practice for Migrating code from Dev to a fresh Test ODI instance

    Dear All,
    This is Priya.
    We are using ODI 11.1.1.6 version.
    In my ODI project, we have separate installations for Dev, Test and Prod, i.e. the master repositories are not common between the three. Now my code is ready in Dev. The Test environment has just been installed with ODI, and the master and work repositories have been created. That's it.
    Now, I need to know and understand the simplest and best way to import the code from Dev and migrate it to the Test environment. Can someone outline it as a step-by-step procedure in 5-6 lines?
    Some questions on the current state:
    1. Do the IDs of the master and work repositories in Dev and Test need to be the same?
    2. I usually see in an export file a repository id of 999 and fail to understand what it is exactly. None of my master or work repositories has that id.
    3. Logical Architecture objects and contexts do not have an export option. What is the suitable alternative for this?
    Thanks,
    Priya
    Edited by: 948115 on Jul 23, 2012 6:19 AM

    948115 wrote:
    Now, I need to know and understand the simplest and best way to import the code from Dev and migrate it to the Test environment.
    If this is the 1st time you are moving to QA, better to export/import the complete work repository. If it is not the 1st time, then create scenarios of the specific packages and export/import them to QA. In the case of scenarios you need not bother about models/datastores. Keep in mind that the logical schema names should be the same in QA as used in your DEV.
    1. Do the IDs of the master and work repositories in Dev and Test need to be the same?
    They should be different.
    2. I usually see in an export file a repository id of 999 and fail to understand what it is exactly.
    It is required to ensure object uniqueness across several work repositories. For more understanding you can refer to:
    http://docs.oracle.com/cd/E14571_01/integrate.1111/e12643/export_import.htm
    http://odiexperts.com/odi-internal-id/
    3. Logical Architecture objects and contexts do not have an export option. What is the suitable alternative for this?
    If you are exporting the topology then you will get the logical connection and context details. If you are not exporting the topology then you need to manually create the contexts and the other physical/logical connections.

  • Best Practice For Secure File Sharing?

    I'm a newbie to both OS X Server and file sharing protocols, so please excuse my ignorance...
    My client would like to share folders in the most secure way possible. I was thinking the best way might be for them to VPN into the server and then view the files through the VPN tunnel; my only issue with this is that I have no idea how to open up File Sharing to ONLY allow users who are connecting from the VPN (i.e. from inside the internal network)... I don't see any options in Server Admin to restrict users in that way.
    I'm not afraid of the command line, FYI, I just don't know if this is:
    1. Possible!
    And 2. The best way to ensure secure AND encrypted file sharing via the server...
    Thanks for any suggestions!

    my only issue with this is that I have no idea how to open up File Sharing to ONLY allow users who are connecting from the VPN
    Simple - don't expose your server to the outside world.
    As long as you're running on a NAT network behind some firewall or router that's filtering traffic, no external traffic can get to your server unless you setup port forwarding - this is the method used to run, say, a public web server where you tell the router/firewall to allow incoming traffic on port 80 to get to your server.
    If you don't setup any port forwarding, no external traffic can get in.
    There are additional steps you can take - such as running the software firewall built into Mac OS X to tell it to only accept network connections from the local network, but that's not necessary in most cases.
    And 2. The best way to ensure secure AND encrypted file sharing via the server...
    VPN should take care of most of your concerns - at least as far as the file server is concerned. I'd be more worried about what happens to the files once they leave the network - for example have you ensured that the remote user's local system is sufficiently secured so that no one can get the documents off his machine once they're downloaded?
