Best practice for storing a user-generated file?

Hi all,
I have a web application where the user draws an image in an applet and can then send that image via MMS.
I wonder what the best practice is for storing the user's image before the MMS is sent.

java.util.prefs
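Since the reply above is terse: if the generated image only needs to live until the MMS goes out, spooling it to a temporary file is one common approach. A minimal sketch, assuming the image already exists as a byte array (class and method names are illustrative, not from the thread):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class TempImageStore {
    // Write the generated image bytes to a temp file that is deleted
    // when the JVM exits; returns the file handle so the MMS-sending
    // code can read it back.
    public static File store(byte[] imageBytes) throws IOException {
        File tmp = File.createTempFile("user-image-", ".png");
        tmp.deleteOnExit(); // clean up automatically after sending
        try (FileOutputStream out = new FileOutputStream(tmp)) {
            out.write(imageBytes);
        }
        return tmp;
    }
}
```

If the image must survive a restart or be shared between servers, a database BLOB or a dedicated storage directory would be more appropriate than a temp file.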

Similar Messages

  • Best Practice for storing user preferences

    Is there something like a best practice or guideline for storing user preferences for a desktop application, such as window position, layout settings, etc.?

    java.util.prefs
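    A minimal sketch of what java.util.prefs looks like for window position (class name and key names are illustrative):

```java
import java.util.prefs.Preferences;

public class WindowPrefs {
    // Per-user preference node for this application; the OS decides
    // where the data actually lives (registry on Windows, XML files
    // under the home directory elsewhere).
    private static final Preferences PREFS =
            Preferences.userNodeForPackage(WindowPrefs.class);

    public static void saveWindowPosition(int x, int y) {
        PREFS.putInt("window.x", x);
        PREFS.putInt("window.y", y);
    }

    public static int[] loadWindowPosition() {
        // The defaults are used the first time the application runs.
        return new int[] {PREFS.getInt("window.x", 0), PREFS.getInt("window.y", 0)};
    }
}
```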

  • Best Practice for storing PDF docs

    My client has a number of PDF documents for handouts that go
    with his consulting business. He wants logged-in users to be able
    to download the PDF handouts at training. The question is,
    what is the best practice for storing and accessing these PDF files?
    I'm using CF/MySQL to put everything else together, and my
    thought was to store the PDF files in the db. Except there seems
    to be a great deal of talk about BLOBs, and about storing files
    this way being inefficient.
    How do I make it so my client can use the admin tool to
    upload the information about the files and the files themselves,
    not store them in the db, but still be able to find them when the
    user wants to download them?

    Storing documents outside the web root and using <cfcontent> to push their contents to the users is the most secure method.
    Putting the documents in a subdirectory of the web root and securing that directory with an Application.cfm will only protect .cfm and .cfc files (as that's the only time CF is involved in the request). That is, unless you configure CF to handle every request.
    A virtual directory is no safer than putting the documents in a subdirectory. The links to your documents are still going to look like:
    http://www.mysite.com/virtualdirectory/myfile.pdf
    Users won't need to log in to access these documents.
    <cfcontent>, or configuring CF to handle every request, is the only way to ensure users have to log in before accessing non-CF files. Unless you want to use web-server authentication.
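    The same principle carries over to other stacks: keep the files outside the web root and let application code stream them only after a login check. A hedged Java sketch of the path-containment part (the directory, class, and method names are hypothetical):

```java
import java.io.File;
import java.io.IOException;

public class SecureDownloads {
    // Documents live outside the web root, so the web server can
    // never serve them directly.
    private static final File DOC_ROOT = new File("/var/app-data/handouts");

    // Resolve a user-supplied name and refuse anything that would
    // escape the document directory (e.g. "../../etc/passwd").
    public static File resolve(String requestedName) throws IOException {
        File candidate = new File(DOC_ROOT, requestedName).getCanonicalFile();
        if (!candidate.getPath().startsWith(DOC_ROOT.getCanonicalPath() + File.separator)) {
            throw new SecurityException("Path escapes document root: " + requestedName);
        }
        return candidate;
    }
}
```

    The login check itself happens in whatever request handler calls resolve(), before any bytes are streamed back.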

  • Best practice for including additional DLLs/data files with plug-in

    Hi,
    Let's say I'm writing a plug-in which calls code in additional DLLs, and I want to ship these DLLs as part of the plug-in.  I'd like to know what is considered "best practice" in terms of whether this is ok  (assuming of course that the un-installer is set up to remove them correctly), and if so, where is the best place to put the DLLs.
    Is it considered ok at all to ship additional DLLs, or should I try and statically link everything?
    If it's ok to ship additional DLLs, should I install them in the same folder as the plug-in DLL (e.g. the .8BF or whatever), in a subfolder of the plug-in folder or somewhere else?
    (I have the same question about shipping additional files too, such as data or resource files.)
    Thanks
                             -Matthew

    Brother wrote:
    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    OO is a choice, not a mandate. Using Java in a procedural way is certainly not ideal, but given that it is existing code I would look more into whether it is well-written procedural code rather than at the lack of OO.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Normally you create a data model driven by business need. You then implement, using whatever means seem expedient in terms of other business constraints, to closely model that data model.
    It is often the case that there is a strong correlation between data models and tables but certainly in my experience it is rare when there are not other needs driven by the data model (such as how foreign keys and link tables are implemented and used.)

  • What are the Best Practices for Optimizing Images in InDesign Files

    Is there a best practice for using images in InDesign to optimize the document before converting to a PDF? Specifically, what I'm asking is, will the PDF file compress better if the images are cropped prior to placing them in InDesign? I'd like to know the answer both for creating PDF files for printing using images that are 300 dpi and for creating PDF files for online delivery using images that are 72 dpi. I have an employee who insists images need to be cropped to their actual dimensions before being placed in the InDesign document. I've never done it that way, and I believe her recommended process is far too time-consuming and leaves you no leeway to tweak your page design, since the images are tightly cropped.

    As for absolute cropping, I agree with your stance. Until the layout is fixed, preserving your ability to easily manipulate photo size and positioning is key.
    Some clever image management methods have been described in the discussion forums, and one that appealed most to me was the use of duplicate linked image folders. Having a high-res (CMYK) folder and a low-res (RGB) folder to switch between for different output enables you to use both to your advantage. Use the low-res images for layout, for internal proofing, and for EPUB/online PDF/HTML output. Then it's simply a quick switch to the high-res image folder for print purposes. You can easily prepare the alternate collection of images with a Photoshop batch convert script or with the Photoshop Image Processor. Save your presets!

  • Best Practice for storing a logon to website in a desktop java app

    Hoping someone well versed in java related security best practices can point me in the right direction.
    I have a small java PC application that uses the Soap API to send various data to a 3rd party.
    Currently I am storing the logon credentials for this 3rd party in a local database used by the application.
    The username / password to connect to this database is encrypted and never accessed in clear text in the code.
    (Although, since the application is stand alone, everything needed to decrypt the database credentials is packaged
    with the application. It would not be easy to get the clear text credentials, but possible)
    The caveat in my case is that the user of the application is not even aware (nor should be) that the application is interacting with
    the 3rd party API at all. All the end user knows is that an entity (that they already have a relationship with) has asked them to
    install this application in order to provide the entity with certain data from the user.
    Is there a more secure way to do this while maintaining the requirement that the user need not know the logon credentials for the 3rd party?

    Moderator advice: Don't double post the same question. I've removed the other thread you started in the Other Security APIs, Tools, and Issues forum.
    db
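    For reference, local encryption of stored credentials in Java usually goes through javax.crypto. A minimal AES-GCM sketch, with the caveat the poster already notes: if the key ships with the application, this only raises the bar for an attacker, it does not remove the risk (all names are illustrative):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class CredentialCrypto {
    private static final int GCM_TAG_BITS = 128;
    private static final int IV_BYTES = 12;

    // Encrypt with AES-GCM; the random IV is prepended to the
    // ciphertext so decrypt() can recover it.
    public static byte[] encrypt(SecretKey key, String plaintext) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        byte[] ct = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    public static String decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, blob, 0, IV_BYTES));
        byte[] pt = cipher.doFinal(blob, IV_BYTES, blob.length - IV_BYTES);
        return new String(pt, StandardCharsets.UTF_8);
    }

    public static SecretKey newKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128); // 128-bit AES works on all JDKs without policy files
        return kg.generateKey();
    }
}
```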

  • Best practices for storing logon/password info

    I'm curious what are the best practices and/or what other organizations are using to store the logon/password information that needs to be shared by several users. This could be, for example, RFC logon that is used in several interfaces; FTP logon, etc. Such information may need to be accessible to all the developers yet should be stored safely.
    In my previous assignments this was usually managed by a Basis admin, but we don't have a designated admin here so it needs to be handled by developers. A suggestion has been made to store it in a Z table in SAP, but we're trying to explore other options.
    Thank you.

    The SecureStore is a protected area only accessible via the SAP kernel functions. It is SAP standard (used by transactions such as SM59, etc.) and is accessed by the system at runtime.
    But if you only want these connections to be temporarily available (so, without stored logon data), then there is a guru solution you might want to consider for that kind of access in ABAP systems.
    For general password management of generic users, or large numbers of them, you can alternatively also consider a password vault (http://www.google.com/#hl=de&source=hp&biw=1276&bih=599&q=password+vault&rlz=1R2ADSA_deCH392&aq=f&aqi=g3&aql=&oq=&gs_rfai=&fp=ec103d87630c3cc0). These, however, typically cannot be accessed at runtime.
    Shall I move this to the security forum, ABAP General, NW Admin, or is someone still going to get themselves Guestified here?
    Cheers,
    Julius

  • PKGBUILD best practice for autotools and missing required files

    I am trying to update one of my packages in the AUR.  Upstream uses the GNU automake/autoconf tools, and this has worked just fine for previous versions.  This time around, the download from upstream is missing several of the mandatory files required by autoconf.  I am trying to figure out the best way to deal with this.
    1.  I can just create them, distribute them with the tarball, and push them into the src directory prior to invoking autoconf.
    or
    2.  I can use the --add-missing flag, but that requires running autoconf multiple times (unless I am confused).
    What is the best practice when files such as NEWS and README are missing?

    I highly recommend you review Brad Hedlund's videos regarding UCS networking here:
    http://bradhedlund.com/2010/06/22/cisco-ucs-networking-best-practices/
    You may want to focus on Part 10 in particular, as this talks about running UCS in end-host mode without vPC or VSS.
    Regards,
    Matt

  • Best practice for storing/loading medium to large amounts of data

    I just have a quick question regarding the best medium for storing a certain amount of data. Currently in my application I have a Dictionary<char,int> that I've created and populate with hard-coded static values.
    There are about 30 items in this Dictionary, so this hasn't presented much of a problem, even though it does make the code slightly harder to read, and I will be adding more data structures with a similar number of items in the future.
    I'm not sure whether it's best practice to hard-code these values, so my question is: is there a better way to store this information and retrieve and load it at run-time?

    You could use one of the following methods:
    Use the application.config file. The upside is that it is easy to maintain. The downside is that a user could edit it manually, as it's just an XML file.
    You could use a settings file. You can specify where the settings file is persisted, including under the user's profile or the application. You could serialize/deserialize your settings to a section in the settings. See this MSDN help section for details about the settings.
    Create a .txt, .json, or .xml file (depending on the format you will be deserializing your data from) in your project and have it copied to the output path with each build. The upside is that you could push out new versions of the file in the future without having to re-compile your application. The downside is that it could be altered if the user has O/S permissions to that directory.
    If you really do not want anyone to access it, and are thinking of pushing out a new application version every time something changes, you could create a .txt, .json, or .xml file just like in the previous step, but this time mark it as an embedded resource in your project (you can do this in the properties of the file in Visual Studio). It will essentially get compiled into your application. Content retrieval is outlined in this how-to from Microsoft, and then you just deserialize the retrieved content the same as in the previous step.
    As for the format of your data: I recommend you use either XML or JSON, or a text file if it's just a flat list of items (i.e. a list of strings). Personally I find JSON much easier to read and change compared to XML, and there are plenty of supported serializers out there. XML is great too if you need to be strict about what the schema is.
    Mark as answer or vote as helpful if you find it useful | Igor
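    The question is about .NET, but the data-file idea looks much the same in Java terms: parse the formerly hard-coded pairs from a stream that can come from a classpath resource (for the embedded-resource option) or an external file (names are illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class StaticDataLoader {
    // Parse "key=value" lines into the map that used to be
    // hard-coded; the stream can come from a classpath resource
    // (Class.getResourceAsStream) or a file next to the executable.
    public static Map<Character, Integer> load(InputStream in) throws IOException {
        Properties props = new Properties();
        props.load(in);
        Map<Character, Integer> result = new HashMap<>();
        for (String key : props.stringPropertyNames()) {
            result.put(key.charAt(0), Integer.parseInt(props.getProperty(key)));
        }
        return result;
    }
}
```

    With the data externalized, adding the future data structures means adding files, not recompiling the lookup code.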

  • Best Practice for Storing Sharepoint Documents

    Hi,
    Is there a best practice for where to store the documents of a SharePoint site? I heard some people say it is best to store SharePoint documents directly in the file system. Others said that it is better to store SharePoint documents in SQL Server.

    What you are referring to is the difference between SharePoint's native storage of documents in SQL, and the option/ability to use SQL's filestream functionality for Remote BLOB Storage (also known as RBS). Typically you are much better off sticking with SQL storage for BLOBs, except in a very few scenarios.
    This page will help you decide if RBS is right for your scenario:
    https://technet.microsoft.com/en-us/library/ff628583.aspx?f=255&MSPPError=-2147217396
    -Corey

  • Best practice for end user menu pages

    Hi
    here is my goal:
    I want to add a link on the end user menu offering the user the ability to reset some LDAP fields (for example, resetting default values for mail settings after the user made a wrong customization).
    I saw that all links on the end user menu point to JSP pages. Must I do the same, or can I achieve my goal with only workflows and forms in the BPE?
    Is there a good example or a best-practice description of such a customization somewhere?
    Thanks a lot

    Do you mean adding additional tabs to the account information (the defaults being identity, assignments, security, and attributes)? So you want an additional tab, such as 'LDAP attributes', after attributes, once you have assigned the LDAP resource to that user, right?
    Cheers,
    Kaushal Shah

  • On best practices for minimizing user impact for db/dw migrations

    Hi Everybody!
    Our department will be undertaking the migration of our ODS and data warehouse to Oracle 10g in the coming months, and I wanted to query this group for any good tips, DOs and DON'Ts, and best practices you might want to share on how to minimize user impact, especially since some of the queries that different departments use have no known author and would need to be migrated to a different database dialect. Our organization is a large one, so efficacy in communicating the benefits of our project and handling a large number of user questions will be key items in our conversion plan.
    Thanks a lot to all those who can contribute to this thread, hopefully it will become a good way to record the expertise of this group's members on this very specific project category.
    -Ignacio

    BTW, it is not clear what you want to migrate FROM: another DB, or simply another Oracle version?
    OK, anyway, speaking about data migration strategy, there is at least one valuable article:
    http://www.dulcian.com/papers/The%20Complete%20Data%20Migration%20Methodology.html
    Speaking about technical execution, you can look at my article "Data migration from old to new application: an experience" at http://www.gplivna.eu/papers/legacy_app_migration.htm
    Neither of them focuses on data warehouses, though.
    Gints Plivna
    http://www.gplivna.eu

  • Best practices for approval of MRP generated PRs

    Hi, all.  I'd very much like to route our MRP generated purchase requisitions directly to the Purchasing department without the need for additional approvals, regardless of $ amount.
    PRs not generated by MRP (manual) will still need to pass through a release strategy.
    What is the standard industry practice for processing MRP generated PRs? 
    What have you seen?
    Thanks in advance.
    Sarah

    Hi,
    Well, I haven't come across a situation which requires approval of MRP-generated PRs.
    If the process of loading demands is controlled, then the output from MRP is by default controlled, unless there are specific enhancements in the MRP calculation that make changes.
    So if there are no such enhancements, then I do not see a need for it.
    So from a good-practice perspective, I would look at controlling the demand and the manner in which it is entered into the system.
    Regards,
    Vivek

  • Best Practice for Global Constants in IRPT files?

    Hi,
    For global constants that need to be accessed inside IRPT files and would only change as the code migrates from Dev -> Test -> Production, what is the preferred method of storing and accessing these constants in a reusable way, so that we don't have to change the code of individual IRPT files? Additionally, this should not be stored in a code file, as the QA team is uncomfortable with changing code files as we migrate through environments.
    It seems like a constants-only .js file would work, but are there other methods, and what is the preferred way of configuring global constants/variables?
    Thanks for the help.
    Kerby

    Hi Kerby,
    I was planning on putting all the config values in the database. Then there is no changing of files that are inside the deployment package.
    Another idea, particularly if this is not a database app:
    What I did for a Portal dynpro app was have multiple properties files named after the host name: myDevHostName.properties, myQAHostName.properties, and myProdHostName.properties.
    They all got deployed with the app, and the code used the host name to open the appropriate file. You'd probably need a custom action block, though, because I only saw the IP address in the attribute list and not the server name. And it's not so good when system names change...
    http://<servername>:<portnumber>/XMII/PropertyAccessServlet?mode=List&Content-Type=text/html
    --Amy Smith
    --Haworth
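    Amy's host-name scheme could be sketched roughly like this in Java (names are illustrative; error handling kept minimal):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.InetAddress;
import java.util.Properties;

public class HostConfig {
    // Build the per-environment file name from the machine's host
    // name, e.g. "myDevHostName.properties".
    public static String configFileName(String hostName) {
        return hostName + ".properties";
    }

    // Load the properties file that matches the current host; all
    // three files ship with the application, but only one is read.
    public static Properties loadForHost() throws IOException {
        String name = configFileName(InetAddress.getLocalHost().getHostName());
        Properties props = new Properties();
        try (InputStream in = HostConfig.class.getResourceAsStream("/" + name)) {
            if (in != null) {
                props.load(in);
            }
        }
        return props;
    }
}
```

    The trade-off Amy mentions is real: the scheme breaks silently when a host is renamed, which is one argument for the database approach instead.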

  • SolMan CTS+ Best Practices for large WDP Java .SCA files

    As I understand it, CTS+ allows ABAP change management to steward non-ABAP objects.  With ABAP changes, if you have an issue in QA, you simply create a new transport and correct the issue, eventually moving both transports to Production (assuming no use of ToC).
    We use ChaRM with CTS+ extensively to transport .SCA files created from NWDI. Some .SCA files can be very large: 300+ MB. Therefore, if we have an issue with a Java WDP application in QA, I assume we are supposed to create a second transport, attach a new .SCA file, and move it to QA. Eventually, this means moving both transports (same ChaRM document) to Production, each one carrying a 300 MB file. Is this SAP's best practice, since all transports should go to Production? We've seen some issues with Production not being too happy about deploying two 300 MB files in a row.  And what about the fact that .SCA files from the same NWDI track are cumulative, so I truly only need the newest one? Any advice?
    FYI - SAP said this was a consulting question and therefore could not address it in my OSS incident.
    Thanks,
    David
