Best Practice to Mothball Library against future need

we have a StorageTek L20 library with an Ultrium 3 drive
We've basically stopped using it, because we're using Backup-to-Disk now, but of course we have a lot of history on tape, which we may need to restore some time in the future.
What's the best thing to do with the Drive & the Library?
Should we:
power the whole thing down & store in a cool place in an airtight bag with desiccant
leave it all powered up so it's dry
leave it all powered up & do a test restore from time to time
The chances are that we'll never need it, but one can never be sure.
And should we, if we can, keep it on Maintenance?
Dave

There are several aspects to this.
Large lists:
http://technet.microsoft.com/en-gb/library/cc262813%28v=office.14%29.aspx
A blog summarising large databases here:
http://blogs.msdn.com/b/pandrew/archive/2011/07/08/articles-about-scaling-sharepoint-to-large-content-database-capacity.aspx
Boundaries and limits:
http://technet.microsoft.com/en-us/library/cc262787%28v=office.14%29.aspx#ContentDB
If at all possible, make your web service clever enough to split content over multiple site collections, so that each individual content database stays smaller.
It can be done but you need to do a lot of reading on this to do it well. You'll also need a good DBA team to maintain the environment.
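As a rough illustration of that splitting idea (this is not SharePoint API code), here is a minimal Python sketch, with hypothetical site collection URLs, that deterministically routes each customer or department to one of several site collections so that no single content database has to hold everything:

    # Illustrative sketch only, not SharePoint API code: route each key to one of
    # several site collections so content is spread across smaller databases.
    import hashlib

    SITE_COLLECTIONS = [  # hypothetical site collection URLs
        "https://portal.example.com/sites/archive01",
        "https://portal.example.com/sites/archive02",
        "https://portal.example.com/sites/archive03",
        "https://portal.example.com/sites/archive04",
    ]

    def site_collection_for(key: str) -> str:
        """Deterministically map a customer/department key to a site collection."""
        digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
        return SITE_COLLECTIONS[int(digest, 16) % len(SITE_COLLECTIONS)]

    if __name__ == "__main__":
        for dept in ("finance", "hr", "engineering"):
            print(dept, "->", site_collection_for(dept))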

Similar Messages

  • Best Practice Building Block Library not accessible in Firefox

    Hello SAP Documentation Team,
    I've just become aware of the Best Practice Building Block Library (http://help.sap.com/bp_bblibrary/500/BBlibrary_start.htm). Unfortunately it can't be used with Firefox 1.5.0.4 because of a script error. I see the dropdown lists, but when I select, for example, Country "Germany", nothing happens. In IE it works perfectly.
    Regards
    Gregor

    Hope that this will change with later Best Practice releases.

  • Best Practices for Keeping Library Size Minimal

    My library file sizes are getting out of control. I have multiple libraries that are now over 3TB in size!
    My question is, what are the best practices in keeping these to a manageable size?
    I am using FCPX 10.2. I have three cameras (2x Sony Handycam PJ540 and 1x GoPro Hero 4 Black).
    When I import my PJ540 videos they are imported as compressed mp4 files. I have always chosen to transcode the videos when I import them. Obviously this is why my library sizes are getting out of hand. How do people deal with this? Do they simply import the videos and choose not to transcode them and only transcode them when they need to work on them? Do they leave the files compressed and work with them that way and then transcoding happens when exporting your final project?
    How do you deal with this?
    As for getting my existing library sizes down, should I just "show package contents" on my libraries and start deleting the transcoded versions, or is there a safer way to do this within FCPX?
    Thank you in advance for your help.

    No. Video isn't compressed like compressing a document. When you compress a document you're just packing the data more tightly. When you compress video you do it basically by throwing information away. Once a video file is compressed, and all video files are heavily compressed in the camera, that's not recoverable. That information is gone. The best you can do is convert it into a format that will not deteriorate further as the file is recompressed. Every time you compress a file, especially to heavily compressed formats, more and more information is discarded. The more you do this the worse the image gets. Transcoding converts the media to a high-resolution, high-data-rate format that can be manipulated considerably without loss and can go through multiple generations without loss. You can't go to second-generation H.264 MPEG-4 without discernible loss in quality.

  • "Best Practice" for a stored procedure that needs to access two schemas?

    Greetings all,
    When my company's application is deployed, two schema owners are typically created and all database objects divided between the two. I'll call them FIRST and SECOND.
    In a standard, vanilla implementation there is never any reason for the two to "talk to each other". No rights to objects in one schema are ever granted to the other.
    I am currently charged, however, with writing custom code to roll up data from one of the schemas and update tables in the other with the rollups. I have created a user whose job it is to run this process, and this user has the proper permissions to all necessary objects in both schemas. I'll call this user MRBATCH.
    Typically, any custom objects, whether they be additional staging tables, temp tables or stored procedures, are saved in the FIRST schema. I tried to save this new stored procedure in the FIRST schema and compile it, but got "Insufficient privileges" errors whenever the code in the stored procedure tried to access any tables in the SECOND schema. This surprised me a little bit because I had no plans to actually EXECUTE the stored procedure as FIRST, but I guess I can understand it from the point of view that you ought to be able to execute something you own.
    So which would be "better" (assuming there's any difference): grant FIRST all of the rights it needs in SECOND and save the stored procedure in FIRST, or could I just save the stored procedure in the MRBATCH schema? I'm not sure which would be "better practice".
    Is there a third option I'm overlooking perhaps?
    Thanks
    Joe

    In this case I would again put it into a third schema, THIRD. This is a kind of API schema. There are procedures in it that provide some customized functionality, and since you grant only the right to execute those procedures (they should be packages, of course) you won't get into any conflicts about allowing somebody too much.
    Note that this suggestion seems very similar to putting the procedure directly into the executing user MRBATCH. It depends on how this schema user is used. I always prefer separating users from schemas.
    By definition the oracle object to represent a schema is identical to the oracle object representing a user (exception: externally defined users).
    my definition is:
    Schema => has objects (tables, packages) and uses tablespace
    User => has privileges (including create session and connect) and uses temp tablespace only. Might have synonyms and views.
    You can mix both, but sometimes it makes much sense to separate one from the other.
    Edited by: Sven W. on Aug 13, 2009 9:51 AM
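    To make that grant pattern concrete, here is a hedged sketch driven from Python with the python-oracledb driver. All object names (first.src_table, second.rollup_target, third.rollup_api) and the connection details are hypothetical placeholders, and the statements assume a suitably privileged account runs them:

        # Sketch only: the "API schema" grant pattern described above, issued via
        # python-oracledb. Object names and connection details are hypothetical.
        import oracledb

        statements = [
            # THIRD gets only the object privileges its procedure needs.
            "GRANT SELECT ON first.src_table TO third",
            "GRANT SELECT, INSERT, UPDATE ON second.rollup_target TO third",
            # The rollup procedure lives in THIRD; callers get EXECUTE only.
            """CREATE OR REPLACE PROCEDURE third.rollup_api AS
               BEGIN
                 INSERT INTO second.rollup_target (amount)
                 SELECT SUM(amount) FROM first.src_table;
               END;""",
            "GRANT EXECUTE ON third.rollup_api TO mrbatch",
        ]

        connection = oracledb.connect(user="admin_user", password="change_me",
                                      dsn="dbhost/orclpdb1")
        cursor = connection.cursor()
        for statement in statements:
            cursor.execute(statement)
        connection.close()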

  • Best practice on mailbox database size & how many servers we need to deploy Exchange Server 2013

    Dear all,
    We have an environment that runs Microsoft Exchange Server 2007 with the following specification:
    4 servers: Hub&CAS1 & Hub&CAS2 & Mailbox1 & Mailbox2 
    Operating System : Microsoft Windows Server 2003 R2 Enterprise x64
    6 mailbox databases
    1500 Mailboxes
    We need to upgrade our Exchange server from 2007 to 2013 to fulfil the following requirement:
    I want to upgrade Exchange Server 2007 to Exchange Server 2013 and implement the following details:
    1500 mailboxes
    10GB or 15GB mailbox quota for each user
    How many servers and databases are required for this migration?
    Number of the servers:
    Number of the databases:
    Size of each database:
    Many thanks.

    You will also need to check server role requirement in exchange 2013. Please go through this link to calculate the server role requirement : http://blogs.technet.com/b/exchange/archive/2013/05/14/released-exchange-2013-server-role-requirements-calculator.aspx
    2 TB is the recommended maximum database size for Exchange 2013 databases.
    Here is the complete checklist to upgrade from exchange 2007 to 2013 : http://technet.microsoft.com/en-us/library/ff805032%28v=exchg.150%29.aspx
    Meanwhile, to reduce the risk and the time consumed completing the migration, you could also look at this application (http://www.exchangemigrationtool.com/), which may be a workable approach for 1500 users and can help you ensure data security during the migration between Exchange 2007 and 2013.
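    As a back-of-envelope check on the database count (the calculator linked above remains the authoritative sizing tool, since it accounts for DAG copies, retention, whitespace and IOPS), the quota and mailbox numbers from the question already imply a minimum:

        # Back-of-envelope only; use the Role Requirements Calculator linked above
        # for real sizing. Numbers below come straight from the question.
        import math

        mailboxes = 1500
        quota_gb = 15        # upper end of the 10-15 GB quota per user
        max_db_gb = 2048     # ~2 TB recommended maximum per database

        raw_capacity_gb = mailboxes * quota_gb
        min_databases = math.ceil(raw_capacity_gb / max_db_gb)

        print(f"Raw mailbox capacity: {raw_capacity_gb} GB (~{raw_capacity_gb / 1024:.1f} TB)")
        print(f"Minimum databases at {max_db_gb} GB each: {min_databases}")
        # 1500 x 15 GB = 22,500 GB, so at least 11 databases before any overhead.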

  • Any best practices sharing iPhoto library between iMac and MacBook Pro

    I have my iPhoto library on my iMac, but I would also like to view and edit keywords, etc on my MacBook Pro.
    Any suggestions on how to do this?

    Larry
    There are two ways to share, depending on what you mean by 'share'.
    If you want the other user to be able to see the pics, but not add to, change or alter your library, then enable Sharing in your iPhoto (Preferences -> Sharing) and leave iPhoto running. On the other machine enable 'Look for Shared Libraries'. Your library will appear in the other machine's source pane.
    Remember iPhoto must be running on both machines for this to work.
    If you want the other user to have the same access to the library as you: to be able to add, edit, organise, keyword etc. then:
    Quit iPhoto on both machines.
    Move the iPhoto Library Folder to an external HD set to ignore permissions
    On each machine in turn: Hold down the option (or alt) key and launch iPhoto. From the resulting dialogue, select 'Choose Library' and navigate to the new library location. From that point on, this will be the default library location. Both machines will have full access to the library, in fact, both accounts will 'own' it.
    However, there is a catch with this system, and it is a significant one. iPhoto is not a multi-user app; it does not have the code to negotiate two users simultaneously writing to the database, and trying will cause database corruption. So only one user at a time, and back up, back up, back up.
    Regards
    TD

  • Standard/best practice of Naming for MM authorisations

    Dear All,
    Can anybody please send the document covering the standard/best-practice naming convention for MM roles in authorisation?
    We want to redo the authorisation system for a client right from scratch (they already have SAP).
    To get an idea of the standard/best-practice naming convention, I need the related document.
    Can anybody please send it to my mail id: [email protected]
    Advance thanks.
    Regards,
    Dayanand

    Dear,
    Usually the role nomenclature applies to the company as a whole.
    Hence the standard way is as below:
    XX - 2 letters to represent the module
    XXXXXXXXXXXX - to represent the function (for example, phy inv)
    XXXX - to represent the plant it applies to
    XXXX - to represent the activity: create/change/display/execute...
    XXXX - to represent the variant
    Hence the role is as below:
    XX:XXXXXXXXXXXX:XXXX:XXXX:XXXX
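    Purely as an illustration of that pattern (the segment widths and sample values here are hypothetical examples, not an SAP standard), a small helper could assemble such role names:

        # Illustrative only: build a role name in the colon-separated pattern above
        # (module : function : plant : activity : variant). Values are hypothetical.
        def build_role_name(module: str, function: str, plant: str,
                            activity: str, variant: str) -> str:
            segments = [
                module.upper()[:2],      # 2 letters for the module, e.g. "MM"
                function.upper()[:12],   # function, e.g. "PHY_INV"
                plant.upper()[:4],       # plant the role applies to
                activity.upper()[:4],    # create / change / display / execute
                variant.upper()[:4],     # variant
            ]
            return ":".join(segments)

        print(build_role_name("MM", "PHY_INV", "1000", "DISP", "V001"))
        # -> MM:PHY_INV:1000:DISP:V001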

  • Flat File load best practice

    Hi,
    I'm looking for a Flat File best practice for data loading.
    The need is to load a flat fle data into BI 7. The flat file structure has been standardized, but contains 4 slightly different flavors of data. Thus, some fields may be empty while others are mandatory. The idea is to have separate cubes at the end of the data flow.
    Onto the loading of said file:
    Is it best to load all data flavors into 1 PSA and then separate into 4 specific DSOs based on data type?
    Or should data be separated into separate file loads as early as PSA? So, have 4 DSources/PSAs and have separate flows from there-on up to cube?
    I guess pros/cons may come down to where the maintenance falls: separate files vs separate PSA/DSOs...??
    Appreciate any suggestions/advice.
    Thanks,
    Gregg

    I'm not sure if there is any best practice for this scenario (or maybe there is one), as this is more related to a specific customer's needs. But if I were you, I would bring the one file into the PSA and route the data from there to its respective ODS. That would give me more flexibility within BI to manipulate the data as needed, without having to involve the business with 4 different files (chances are that they will get the splitting wrong). So in case of any issue, your troubleshooting would start from the PSA rather than going through the file (very painful and frustrating) to see which records in the file broke the report. I'm more comfortable handling BI objects than data files, because you know exactly where to look.
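    None of this is BW code, but as a rough sketch of the routing idea (one inbound file, records bucketed by a hypothetical "flavor" field, mirroring one PSA feeding four DSOs), it might look like this:

        # Not BW code - just a sketch of the routing idea: one inbound flat file,
        # rows bucketed by a (hypothetical) FLAVOR column, one bucket per DSO flow.
        import csv
        from collections import defaultdict

        def split_by_flavor(path: str, flavor_field: str = "FLAVOR") -> dict:
            """Read one delimited file and bucket rows by their data flavor."""
            buckets = defaultdict(list)
            with open(path, newline="", encoding="utf-8") as handle:
                for row in csv.DictReader(handle):
                    buckets[row.get(flavor_field, "UNKNOWN")].append(row)
            return buckets

        # Each bucket would then feed its own DSO/cube flow, e.g.:
        # for flavor, rows in split_by_flavor("inbound.csv").items():
        #     load_to_dso(flavor, rows)   # hypothetical loader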

  • Best Practices vs. BPM

    Hi,
    I'm new on XI, and I have to define some best-practices and naming convention that will be used by our company. The goal is to try to start working in XI the right way.
    I found following sources of information :
    SAP XI 3.0 Best Practices for Naming Conventions: https://websmp207.sap-ag.de/~sapidb/011000358700004455192006E/NameConventions.pdf
    D-XIE Soap part 3: Determine Software Component Version of standard SAP IDocs and RFMs (/people/r.eijpe/blog/2006/05/22/d-xie-soap-part-3-determine-software-component-version-of-standard-sap-idocs-and-rfms)
    D-XIE Soap part 4: XI Software Component Architecture for Point-to-Point Scenarios (/people/alwin.vandeput2/blog/2006/06/07/d-xie-soap-part-4-xi-software-component-architecture-for-point-to-point-scenarios)
    Here is my scenario (probably a classic one!) :
    SAP R/3 (Via RFC) ==> XI ==> 3rdParty Product (Via File Adapter)
    If I follow best-practices indicated in previous documents, I need 3 products with their corresponding SWCV :
    - One for SAP R/3 (custom function) : Contains the RFC
    - One for XI : Mapping Objects, Integration Scenarios / Processes
    - One for 3rdParty Product : Interface Objects
    Up to now... Does it sound correct based on your experience ??
    Now, here is my problem: BPM. It sounds logical to place the Integration Process in the XI SWCV. But I cannot use Message Interfaces from another SWCV (i.e. from my 3rd-party product or from SAP R/3).
    There is probably something wrong in my understanding, and I would really appreciate if somebody could help !! How should I proceed ? I'm not sure I should put everything under the same SWCV to use BPM !
    Thanks in advance,

    Hello Anne,
    Sebastien asked me to respond to your question.
    I think your question has already been answered by Wojciech Gasior. You should use abstract interfaces. The abstract interface should be created in the same SWCV as the BPM.
    Your flow is:
    Outbound interface (RFC) ==> Message mapping 1 ==> Abstract interface ==> BPM ==> Abstract interface ==> Message mapping 2 ==>Inbound interface (File)
    Now the second question...how many SWCVs?
    If you follow ..
    - "SAP XI 3.0 Best Practices for Naming Conventions"
    - weblog "Structuring Integration Repository Content - Part 1: Software Component Versions" and
    - our weblog "D-XIE Soap part 4: XI Software Component Architecture for Point-to-Point Scenarios"...
    ... then you should have at least 3 SWCVs:
    ...but you could even think of 4.
    SWCV 1: "Sender application"
    - Outbound interface (RFC)
    SWCV 2: "XI as integration engine"
    - Message mapping 1
    SWCV 3: "XI as process engine"
    - Abstract interface
    - BPM
    - Abstract interface
    SWCV 2: "XI as integration engine"
    - Message mapping 2
    SWCV 4: "Receiver application"
    - Inbound interface (File)
    We advise not to place mapping programs in the sender or receiver SWCVs, because those are not the applications that execute the mappings. XI executes the mappings, so they should go in an XI SWCV.
    Why split the mappings from the BPM? The mappings are executed by the integration engine in the pipeline. The BPM is executed by the process engine.
    Why not split the abstract interfaces from the BPM? The abstract interfaces are the interfaces (signature) of the BPM.
    Is 4 SWCVs not enormous overkill instead of using 1 or maybe 2? That's up to you. Staying flexible for reuse means splitting up your SWCVs.
    My opinion about using just one SWCV: it is wrong, because you don't keep a good administration of the relationship between interfaces and SWCVs. See our weblog for a detailed description of the administration problem.
    My opinion about using two SWCVs: where do you put your mapping? In the sender or in the receiver SWCV? It is not the sender application or the receiver application that executes the mapping; XI executes the mapping.
    My opinion about using three SWCVs: for point-to-point scenarios it is best. You stay flexible to upgrade to all kinds of scenarios.
    My opinion about using four SWCVs: for BPM scenarios, it is the most flexible option and a good way of controlling the versioning of the SWCs.
    Hope it helps you and I would like to hear your and other SDNers opinion about our opinion.
    Kind regards,
    Alwin.
    (See also our D-XIE weblogs.)

  • Importing/Transcoding best practices

    Hello
    Apologies if this is a very basic question: I just returned from Africa and I have many hours of video as a result. All videos are 1080p, high quality and what not, and vary from 30s to 30 minutes in length. Therefore, some files are a few megabytes, other, a few gigabytes.
    The output for my project is no more than 7 minutes or so, therefore I have to cut a lot.
    My question: what is the best practice when you have this much video? Should I import and transcode everything (all 73GB of video), or is the better practice to cut what you need first and then transcode only that?
    I am using a MBPr 15" with 16GB and 512GB, so I've unchecked the "Copy files to Final Cut Events Folder" in order to not eat up my local HD.
    Anyhow, any advice would be really appreciated, thanks again
    Rob

    First, thank you for your quick reply and being very nice to me seeing as my questions are probably very basic. I have purchased a book on FCPx but it didn't deal with workflows very well, especially with what I'm dealing with.
    "I'd suggest you acquire a large external drive":
    Done, I'm using a 1TB USB3 drive specifically for this project. All videos are loaded onto the HD and when I began importing, I checked off the option to "Copy files to FCPx events folder" in order to centralize my content to the HD. That said, the Events Folder on the local HD DOES have content, pointers I believe. Should I be backing those up to the external HD?
    "Before capturing footage create a CDL (capture decision list) and capture what you intend to use."
    Not done. Some of the footage I have didn't lend itself for this unfortunately. For example, I had a gopro camera mounted on my head and another mounted on the head of a local tribesman while we went hunting for small game (their food of course). So the videos are long and I'd like to include portions of it into the final video. Is the only option for me to import and optimize the whole thing, or can I import, not optimize, review, cut, save the portions I like, then optimize those sections?
    I'm hoping you can spare a little more patience for me. I'm a photog so my workflow there is solid. I'm very new at this and I'd like to get better. The management of files for me is key so I want to get off on the right foot.
    Cheers
    Rob

  • Transfer iphoto library to external harddrive. Best practice?

    Need to transfer iphoto library to external harddrive due to space issues. Best practice?

    Moving the iPhoto library is safe and simple: quit iPhoto and drag the iPhoto Library, intact as a single entity, to the external drive. Then depress the Option key and launch iPhoto, using the "select library" option to point to the new location on the external drive. Fully test it and then trash the old library on the internal drive (test one more time prior to emptying the trash).
    And be sure that the external drive is formatted Mac OS Extended (Journaled) (iPhoto does not work with drives in other formats) and that it is always available prior to launching iPhoto.
    And back up soon and often: having your iPhoto library on an external drive is not a backup, and if you are using Time Machine you need to check and be sure that TM is backing up your external drive.
    LN

  • Best practice SOD Library in AACG 8.5.1.278

    Hi,
    We require the best practice SOD Library in AACG 8.5.1.278 for EBS R12. I have searched Oracle eDelivery but could only find the library for AACG 8.6. We are using a Windows x64 platform.
    Can someone provide a download for the same in English language?
    Thanks.
    Abhishek Jain

    Hi,
    The best practice library files will be in the directory where you downloaded and unzipped the GRC media pack from eDelivery. Go to that directory on the system/server and select the "Content" folder. You can copy the files to your desktop and upload them into AACG.
    Hope this helps. Let me know if you need any further assistance.
    Best Regards,
    Manjunath

  • Exchange 2010 - What is best practice for protection against corruption replication?

    My Exchange 2010 SP3 environment includes a DAG with an offsite passive copy. The DB is backed up nightly with TSM TDP. My predecessor also installed DoubleTake software to protect the DB against replication of malware or corruption to the passive mailbox server. DoubleTake updates the offsite DB replica every 4 hours. Understanding that this is ultimately a decision based on my company's risk tolerance, to that end, what is the probability of malware or corruption propagating due to replication?
    What is industry best practice: do most companies have a 3rd, lagged copy of the DB in the DAG, or are 3rd-party solutions such as DoubleTake commonly employed? Are there other, better (and less expensive) options?

    Correct. If an 8-day lagged copy is maintained, then 8 days' worth of daily transaction log files are preserved before being replayed into the lagged database. This ensures point-in-time recovery, as you can select the log files that you need to replay into the database.
    Logs will get truncated once they have been successfully replayed into the database and their lag timestamp has expired.
    Each database copy has a checkpoint file (.chk), which keeps track of transaction log files status.
    Command to check the Transaction Logs replay status:
    eseutil /mk <path-of-the-chk-file>  - (stored with the Transaction log files)
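    If you had several copies to check, a small wrapper could loop that same eseutil /mk command over the checkpoint files. This is only a hedged sketch: it assumes it runs on the mailbox server with eseutil.exe on the PATH, and the .chk paths shown are hypothetical.

        # Hedged sketch: loop "eseutil /mk" over several checkpoint files. Assumes
        # eseutil.exe is on PATH on the mailbox server; the paths are hypothetical.
        import subprocess

        CHECKPOINT_FILES = [
            r"D:\ExchangeDatabases\DB01\E00.chk",
            r"D:\ExchangeDatabases\DB02\E01.chk",
        ]

        for chk in CHECKPOINT_FILES:
            print(f"=== {chk} ===")
            result = subprocess.run(["eseutil", "/mk", chk],
                                    capture_output=True, text=True)
            print(result.stdout or result.stderr)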
    - Sarvesh Goel - Enterprise Messaging Administrator

  • Best practice for frequently needed config settings?

    I have a command-line tool I wrote to keep track of (primarily) everything I eat and drink in the course of the day.  Obviously, therefore, I run this program many times every day.
    The program reads a keyfile and parses the options defined therein.  It strikes me that this may be awfully inefficient to open the file every time, read it, parse options, etc., before even doing anything with command-line input.  My computer is pretty powerful so it's not actually a problem, per se, but I do always want to become a better programmer, so I'm wondering whether there's a "better" way to do this, for example some way of keeping settings available without having to read them every single time.  A daemon, maybe?  I suspect that simply defining a whole bunch of environment variables would not be a best practice.
    The program is written in Perl, but I have no objection to porting it to something else; Perl just happens to be very easy to use for handling a lot of text, as the program relies heavily on regexes.  I don't think the actual code of the thing is important to my question, but if you're curious, it's at my github.  (Keep in mind I'm strictly an amateur, so there are probably some pretty silly things in my code.)
    Thanks for any input and ideas.

    There are some ways around this, but it really depends on the type of data.
    Options I can think of are the following:
    1) read a file at every startup as you are already doing.  This is extremely common - look around at the tools you have installed, many of them have an rc file.  You can always strive to make this reading more efficient, but under most circumstances reading a file at startup is perfectly legitimate.
    2) run in the background or as a daemon which you also note.
    3) similar to #1, save the data in a file, but instead of parsing it each time, save it as a binary. If your data can all be stored in some nice data structure in the code, in most languages you can just write the block of memory occupied by that data structure to a file; then on startup you just transfer the contents of the file back into a block of allocated memory. This is quite doable, but for the vast majority of situations it would be a bad approach (IMHO). The data have to be structured in such a way that they occupy one continuous memory block, and depending on the size of the data block this in itself may be impractical or impossible. Also, you'd need a good amount of error checking, or you'd simply have to "trust" that nothing could ever go wrong in your binary file.
    So, all in all, I'd say go with #1, but spend time tuning your file read/write procedures to be efficient. Sometimes a lexer (GNU flex) is good for this, but often it is overkill and a well-written series of if(strncmp(...)) statements will be better*.
    Bear in mind though, this is from another amateur. I code for fun - and some of my code has found use - but it is far from my day job.
    edit: *note - that is a C example, and the flex library is easily used in C. I'd be surprised if there are not Perl bindings for flex, but I very rarely use Perl. As an afterthought, I'd be surprised if flex is even all that useful in Perl, given Perl's built-in regex abilities. After-after-thought, I would not be surprised if Perl itself were built on some version of flex.
    edit2: also, I doubt environment variables would be a good way to go.  That seems to require more system calls and more overhead than just reading from a config file.  Environment variables are a handy way for several programs to be able to access/change the same setting - but for a single program they don't make much sense to me.
    Last edited by Trilby (2012-07-01 15:34:43)
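    The original tool is Perl, but as a language-neutral illustration of the "parse once, reuse the result" idea from option #3 (using a serialized cache rather than raw memory blocks), a sketch along these lines could work; the file names are hypothetical:

        # Python sketch of the caching idea: parse the config once, store the parsed
        # result, and reuse it until the config file's mtime changes. The config and
        # cache file names are hypothetical.
        import json
        import os

        CONFIG_PATH = os.path.expanduser("~/.foodlogrc")        # hypothetical key=value config
        CACHE_PATH = os.path.expanduser("~/.foodlogrc.cache")   # cached parsed settings

        def parse_config(path: str) -> dict:
            """Parse simple key=value lines, skipping blanks and comments."""
            settings = {}
            with open(path, encoding="utf-8") as handle:
                for line in handle:
                    line = line.strip()
                    if line and not line.startswith("#") and "=" in line:
                        key, value = line.split("=", 1)
                        settings[key.strip()] = value.strip()
            return settings

        def load_settings() -> dict:
            """Return cached settings if the cache is newer than the config file."""
            config_mtime = os.path.getmtime(CONFIG_PATH)
            if os.path.exists(CACHE_PATH) and os.path.getmtime(CACHE_PATH) >= config_mtime:
                with open(CACHE_PATH, encoding="utf-8") as handle:
                    return json.load(handle)
            settings = parse_config(CONFIG_PATH)
            with open(CACHE_PATH, "w", encoding="utf-8") as handle:
                json.dump(settings, handle)
            return settings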

  • ICloud document library organization best practices?

    While I think the iCloud document library could work pretty well if I was iOS-only, I'm still having some trouble organizing something that works with my work and personal Macs as well. A big gap is lack of an iOS version of Preview.
    But more importantly, I still keep documents organized by project, and I have a lot of project folders because, well, I have a lot of work! I'm not sure how to best reconcile that with the limitations imposed by iCloud Documents. And I'm not sure how/if Mavericks tags will really help.
    The best example I've seen of a best practice to organizing iCloud documents was in this blog post from the makers of iA Writer:
    http://ia.net/blog/mountain-lions-new-file-system/
    Their folder structure mirrored their workflow rather than projects, which I think could be interesting. They haven't updated it since Mavericks, and I'm curious how they might add tags. Perhaps tags would be used for projects?
    Right now, I tend to just keep documents in iCloud that I'm actively working on, since I might need to edit it at home or on my iPad. Once they're complete, I move them to the respective project folder on the Mac. Dropbox keeps the project folders in sync, which makes iCloud Documents feel redundant.
    This workflow still feels kludgy to me.
    Basically, I'm asking, have you effectively incorporated iCloud Documents into your Mac workflow? What are your best practice recommendations?
    Thanks.
    Paul

    >
    Madhu_1980 wrote:
    > Hi,
    >
    >
    > As per the doc "Best Practices for Naming Conventions" https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/90b213c2-d311-2a10-89bf-956dbb63aa7f
    >
    > In this doc, we see there are no prefixes or suffixes like DT_ for data types, MT_ for Message types, SI_ for service interfaces OM_ for operation mappings (MM_ in message mappings in earlier versions).
    >
    > but i have seen some people maintain these kind of conventions.
    >
    > For larger projects, what is the best option,
    > A) to strictly follow the instructions in the above document and not to maintain the object type prefixes or suffixes.
    > B) or to have this kind of prefixes in addition to what mentioned in the naming conventions doc.
    >
    > which is preferable, from point of long term maintainance.
    >
    > i would appreciate an opinion/guideline from people who had worked on multiple projects.
    >
    > thanks,
    > madhu.
    I have seen projects that insist on having DT_ and MT_ prefixes, and also projects which don't use them.
    Even if you don't have a DT_ or MT_ prefix for data types and message types, it is essential to have AA, OA, OS, IS, etc. defined for a message or service interface, as that gives you an idea of the mode and direction of the interface.
    In general, I strongly feel that the naming conventions suggested by the document are quite enough to accommodate a large number of projects unless something very specific pops up.
