Best practice: path to self

I did some searching but didn't find an answer, so I'm posting.
What is the best practice for finding the absolute path to a class?
In class MYCLASS_:
String cp = System.getProperty( "java.class.path" );
Should I start with cp and then loop through the class path looking for MYCLASS_.class?
The motivation for this is that my package will contain a directory "data" to and from which MYCLASS_ will read and write.

Thanks Levi!
This did the trick:
package sj;
import java.net.URL;
public class selftestme {
    public selftestme() {
        Class<?> clss = this.getClass();
        // getResource resolves "data" relative to this class's package directory
        URL u = clss.getResource( "data" );
        System.out.println( "class name is: " + clss.getName() );
        if( u == null ) {
            System.out.println( "URL is null" );
        } else {
            System.out.println( "URL is " + u.toExternalForm() );
        }
    }
    public static void main( String[] argv ) {
        new selftestme();
    }
}
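For reference, a class can also locate itself directly, without walking the class path. A minimal sketch (the class name is illustrative):

```java
import java.net.URL;

// Sketch: two common ways for a class to find where it was loaded from.
public class WhereAmI {
    public static void main(String[] args) {
        // 1. The URL of this class's own .class file (also works inside a jar).
        URL classFile = WhereAmI.class.getResource("WhereAmI.class");
        System.out.println("class file:  " + classFile);

        // 2. The code source: the classes directory or jar the class came from.
        URL codeSource = WhereAmI.class
                .getProtectionDomain().getCodeSource().getLocation();
        System.out.println("code source: " + codeSource);
    }
}
```

One caveat: getResource returns a URL, not a File. When the class is packaged in a jar, that URL does not map to a plain directory on disk, so writing into a "data" directory next to the class only works in an exploded (directory) layout.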

Similar Messages

  • Best Practice For Referencing JPEG Path Using Servlets To Prepare HTML IMG

    I am migrating a legacy app from Tomcat 5 to WebLogic 11g (10.3). In the legacy app, servlets write HTML that uses relative paths for <IMG src="../images/img.jpeg"> and <script src="../javascript/js.js">. The app is deployed as an exploded archive. Unfortunately, none of the images or scripts are being loaded. I've tried using http://serverIP:portNum/contextName/images/img.jpeg but it doesn't work. I've also checked the name of the context in the servlet, and it's the root context. Could it have something to do with me having to append .war onto the application when I deploy it? Would it help if I deployed it as a war inside an ear? Basically, I want some best practices for doing this on WebLogic 11g. There are a lot of images and javascript files, and I'm really hoping they don't have to be inserted using ClassLoader.getResourceAsStream()... thank you.

    Best Practice:
    1. Move static files like images, CSS, and JavaScript to a web server infrastructure if available.
    2. If this is not your case, then please send your directory information showing how you have packaged your EAR, and I can advise. :)
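    One low-tech fix for the broken relative paths, assuming the servlets can be edited: prefix every static resource with the runtime context path instead of hard-coding "../". A sketch (the helper and names are hypothetical; in a real servlet the prefix would come from request.getContextPath()):

    ```java
    // Sketch: build context-relative URLs for static resources instead of "../images/...".
    public class ResourceUrls {
        // contextPath is "" for the root context, or "/myapp" otherwise,
        // exactly as HttpServletRequest.getContextPath() would return it.
        static String resourceUrl(String contextPath, String relativePath) {
            return contextPath + "/" + relativePath;
        }

        public static void main(String[] args) {
            System.out.println("<img src=\"" + resourceUrl("", "images/img.jpeg") + "\">");
            System.out.println("<img src=\"" + resourceUrl("/legacy", "images/img.jpeg") + "\">");
        }
    }
    ```

    This keeps the generated URLs correct whether the app is deployed at the root context or under a named context, so the .war naming question stops mattering.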

  • Best Practice Employee Self-service based on ECC 6.0

    Hello,
    Where do I find info about best-practice installation for Employee Self-service based on ECC 6.0?
    I only seem to find info about older versions that are not valid anymore.
    Regards,
    Fredrik

    Fredrik,
    Please check this
    http://help.sap.com/saphelp_erp2005/helpdata/en/f6/263359f8c14ef98384ae7a2becd156/frameset.htm
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/webcontent/uuid/a0b47200-9c6d-2910-afa6-810c12eb7eb3
    Hope this helps.
    Have a great weekend
    Cheers,
    Sandeep Tudumu

  • SAP Best Practice for Self-Service Procurement

    Hello All,
    I am trying to find the Best Practice for Self-Service Procurement on help.sap.com,
    but I couldn't find it. Please help me locate it.

    Hi, check the links below to see if they are useful for you:
    http://www50.sap.com/businessmaps/59E32671A32A411692387571253E292A.htm
    help.sap.com/.../SAP_Best_Practices_whatsnew_AU_V3600_EN.ppt
    www50.sap.com/.../DADF68FA02AB4E0482021E98D8BB986F.htm
    Thanks,
    Batchu

  • SQL Server installation paths best practices

    In my company we're planning to set up a new (consolidated) SQL Server 2012 server (on Windows 2012 R2, VMware). The current situation is that there is one SQL Server 2000 instance, a few SQL Server 2008 Express instances, and a lot of Access databases. For the installation I'm wondering
    what the best selections for the various installation paths are. Our infra colleagues (offshore) have the following standard partition setup for SQL Server servers:
    C:\ OS
    E:\ Application
    L:\ Logs
    S:\ DB
    T:\ TEMPDB
    And during the installation I have to make a choice for the following
    Shared feature directory: x:\Program Files\Microsoft SQL Server\
    Shared feature directory (x86): x:\Program Files\Microsoft SQL Server\
    Instance root directory (SQL Server, Analysis Services, Reporting Services): x:\Program Files\Microsoft SQL Server\
    Database Engine Configuration Data Directories:
    Data root directory: x:\Program Files\Microsoft SQL Server\
    User database directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    User database log directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Temp DB directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Temp DB log directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Backup directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Analysis Services Configuration Data Directories:
    User database directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    User database log directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Temp DB directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Temp DB log directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Backup directory: x:\Program Files\Microsoft SQL Server\MSSQL11.x\MSSQL\...
    Distributed Replay Client:
    Working Directory: x:\Program Files (x86)\Microsoft SQL Server\DReplayClient\WorkingDir\
    Result Directory: x:\Program Files (x86)\Microsoft SQL Server\DReplayClient\ResultDir\
    So I'd like some assistance filling in the x drive letters. I understand it's best practice to separate the data files and the log files. But should that also be the case for TempDB? And should both the database and tempdb log files go to the
    same log partition then? What about the backup directories? Any input is very much appreciated!
    Btw, I followed the http://www.sqlservercentral.com/blogs/basits-sql-server-tips/2012/06/23/sql-server-2012-installation-guide/ guide for the installation (test server for now).

    You can place all installation libraries on the E:\ drive.
    >>So I'd like some assistance filling in the x drive letters. I understand it's best practice
    to separate the data files and the log files. But should that also be the case for TempDB? And should both the database and tempdb log files go to the same log partition then? What about the backup directories? Any input is very much appreciated!
    You can place the tempdb data files on the T:\ drive, and I prefer to place the tempdb log and user database log files
    on the same drive, i.e. the L:\ drive.
    >>Backup directories
    If you are not using any third-party tool, then I would prefer to create a separate drive for backups.
    Refer the below link for further reading
    http://www.brentozar.com/archive/2009/02/when-should-you-put-data-and-logs-on-the-same-drive/
    --Prashanth
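    Mapping this thread's partition convention onto SQL Server 2012's unattended-setup parameters might look roughly like this (a sketch only: the parameter names are from the standard setup ConfigurationFile.ini, while the drive letters and subfolders follow this thread's convention; B:\ for backups is a hypothetical extra drive per the advice above):

    ```ini
    ; Sketch: SQL Server 2012 setup directories mapped to the partition scheme above
    INSTALLSHAREDDIR="E:\Program Files\Microsoft SQL Server"
    INSTALLSHAREDWOWDIR="E:\Program Files (x86)\Microsoft SQL Server"
    INSTANCEDIR="E:\Program Files\Microsoft SQL Server"
    SQLUSERDBDIR="S:\MSSQL\Data"
    SQLUSERDBLOGDIR="L:\MSSQL\Logs"
    SQLTEMPDBDIR="T:\MSSQL\TempDB"
    SQLTEMPDBLOGDIR="L:\MSSQL\Logs"
    SQLBACKUPDIR="B:\MSSQL\Backup"
    ```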

  • Path to best practices.

    Hi experts,
    I am searching for the SAP best practices for XI. I am unable to trace the path on SDN or service.sap.com.
    Could you let me know the path for it?
    Thanks in advance.
    Kiran.

    Hi Kiran,
    If you are interested in SAP Best Practices please follow the links below for more information:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/8519e590-0201-0010-6280-d0766e58de6a
    •  SAP Best Practices on the SAP Service Marketplace
    •  SAP Best Practices for High Tech in the SAP Help Portal
    http://help.sap.com/bp_hightechv1500/HighTech_DE/index.htm
    also read :
    /people/marian.harris/blog/2005/06/23/need-to-get-a-sap-netweaver-component-implemented-quickly-try-sap-best-practices
    Regards,
    Jyoti

  • MSI Self Heal/Repair. Best Practices

    I understand that an installed application (MSI) can self-heal or repair for several reasons. User registry keys generated on first launch or if a Key file/registry item is missing from
    a component, to name a couple examples.
    We install applications manually via file share, during MDT or Zero touch OS deployment, and Configuration Manager 2012 SP1 deployments. We installed Configuration Manager 2012 in our environment
    a few months ago before that we had 2007.
    The Install Source location can vary and file servers/Distribution Points can change or be replaced, breaking an MSI's ability to self-heal or repair.
    How do I manage this? What is the best practice?
    I know SCCM 2012 has the ability to manage this to some degree when you input a Product Code in the Application Deployment Type. Does this feature only work for Applications installed by SCCM
    2012? What about applications that are preexisting on a machine. Can SCCM manage those?
    I am concerned with XP and Windows 7 systems.

    Mavtech, I am not referring to applications being removed from a device. I am talking about the operating system's ability to self-heal/repair an installed application.
    athmanb, you are correct when you say that MSI files are cached locally. However, you may not be aware that this is not the original version of the MSI. The cached MSI has all the CAB files removed from it, so if a file is missing from the installation it will
    not be able to repair from the MSI located in C:\windows\installer.
    If you break open an MSI using Wise or AdminStudio, you will see that the files of the software are contained in Components. A Component can have one or more files in it. Every Component has a Primary Key file. MSI will check for the existence of that Primary
    Key file; if MSI does not find that file, it will attempt to repair that Component. It will look in the registry for the Source Location of the MSI. If it cannot find that MSI, it will prompt the user for the original location.
    Actually yes Windows does rely on a Package source for MSI repairs as you can see above. You can see the Source Location in the registry.
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{xxxxxxx}  "InstallSource"
    Where xxxxxxx is the Product Code of the installed application.
    If you install via SCCM you will see the location of the Distribution Point. If you install manually you will see the location of the File Share you installed from.

  • Best practice on sqlite for games?

    Hi Everyone, I'm new to building games/apps, so I apologize if this question is redundant...
    I am developing a couple games for Android/iOS, and was initially using a regular (un-encrypted) sqlite database. I need to populate the database with a lot of info for the games, such as levels, store items, etc. Originally, I was creating the database with SQL Manager (Firefox) and then when I install a game on a device, it would copy that pre-populated database to the device. However, if someone was able to access that app's database, they could feasibly add unlimited coins to their account, unlock every level, etc.
    So I have a few questions:
    First, can someone access that data in an APK/IPA app once downloaded from the app store, or is the method I've been using above secure and good practice?
    Second, is the best solution to go with an encrypted database? I know Adobe Air has the built-in support for that, and I have the perfect article on how to create it (Ten tips for building better Adobe AIR applications | Adobe Developer Connection) but I would like the expert community opinion on this.
    Now, if the answer is to go with encrypted, that's great - but, in doing so, is it possible to still use the copy function at the beginning or do I need to include all of the script to create the database tables and then populate them with everything? That will be quite a bit of script to handle the initial setup, and if the user was to abandon the app halfway through that population, it might mess things up.
    Any thoughts / best practice / recommendations are very appreciated. Thank you!

    I'll just post my own reply to this.
    What I ended up doing, was creating the script that self-creates the database and then populates the tables (as unencrypted... the encryption portion is commented out until store publishing). It's a tremendous amount of code, completely repetitive with the exception of the values I'm entering, but you can't do an insert loop or multi-line insert statement in AIR's SQLite so the best move is to create everything line by line.
    This creates the database, and since it's not encrypted, it can be tested using Firefox's SQLite manager or some other database program. Once you're ready for deployment to the app stores, you simply modify the above set to use encryption instead of the unencrypted method used for testing.
    So far this has worked best for me. If anyone needs some example code, let me know and I can post it.

  • Best practice for multi-language content in common areas

    I've got a site with some text in the header/footer/nav that needs to be translated between an English and a Spanish site, which use the same design. My intention was to set up all the text as content to facilitate this. However, if I use a standard dialog with the component's path set to a child of the current page node, I would need to re-enter the text on every page. If I use a design dialog, or a standard dialog with the component's path set absolutely, the English and Spanish sites will share the same text. If I use a standard dialog with the component's path set relatively (e.g. path="../../jcr:content/myPath"), the pages using the component would all need to be at the same level of the hierarchy.
    It appears that the Geometrixx demo doesn't address this situation, and leaves copy in English. Is there a best practice for this scenario?

    I'm finding that something to the effect of <cq:include path="<%= strCommonContentPath + "codeEntry" %>" resourceType ...
    works fine for most components, but not for parsys, or a component containing a parsys. When I attempt that, I get a JS error that says "design.path is null or not an object". Is there a way around this?

  • Best practices for ARM - please help!!!

    Hi all,
    Can you please help with any pointers / links to documents describing best practices for who should be creating the GRC request in the ARM workflow below, in GRC 10.0?
    Create GRC request -> role approver -> risk manager -> security team
    The options are: end user / manager / functional super users / security team.
    End user and manager are not possible: we cannot train that many people. The functional team is refusing, since it's a lot of work. Please help me with pointers to any best-practices documents.
    Thanks!!!!

    In this case, I recommend proposing that the department managers create GRC Access Requests.  In order for the managers to comprehend the new process, you should create a separate "Role Catalog" that describes what abilities each role enables.  This Role Catalog needs to be taught to the department Managers, and they need to fully understand what tcodes and abilities are inside of each role.  From your workflow design, it looks like Role Owners should be brought into these workshops.
    You might consider a Role Catalog that the manager could filter on and make selections from.  For example, an AP manager could select "Accounts Payable" roles, and then choose from a smaller list of AP-related roles.  You could map business functions or tasks to specific technical roles.  The design flaw here, of course, is the way your technical roles have been designed.
    The point being, GRC AC 10 is not business-user friendly, so using an intuitive "Role Catalog" really helps the managers understand which technical roles they should be selecting in GRC ARs.  They can use this catalog to spit out a list of technical role names that they can then search for within the GRC Access Request.
    At all costs, avoid having end-users create ARs.  They usually select the wrong access, and the process then becomes very long and drawn out because the role owners or security stages need to mix and match the access after the fact.  You should choose a Requestor who has the highest chance of requesting the correct access.  This is usually the user's Manager, but you need to propose this solution in a way that won't scare off the manager - at the end of the day, they do NOT want to take on more work.
    If you are using SAP HR, then you can attempt HR Triggers for New User Access Requests, which automatically fill out and submit the GRC AR upon a specific HR action (New Hire, or Termination).  I do not recommend going down this path, however.  It is very confusing, time consuming, and difficult to integrate properly.
    Good luck!
    -Ken

  • Best Practice for Flat File Data Uploaded by Users

    Hi,
    I have the following scenario:
    1. Users would like to upload data from a flat file and subsequently view their reports.
    2. The SAP BW support team would not be involved in the data upload process.
    3. Users would not go to RSA1 and use InfoPackages & DTPs. Hence, another mechanism for data upload is required.
    4. Users consist of two groups, external and internal. External users would not have access to the SAP system; however, access via a portal is acceptable.
    What are the best practices we should adopt for this scenario?
    Thanks!

    Hi,
    I can share what we do in our project.
    We get the files from the web to the application server, into a path dedicated to this process. The file placed on the server has a naming convention based on your project; you can name it. Every day a file with the same name is placed on the server with different data. The path in the InfoPackage is fixed to that location on the server. After this, the process chain triggers and loads the data from that particular path, which is fixed on the application server. After the load completes, a copy of the file is taken as a backup and the file is deleted from that path.
    So this happens every day.
    Rgds
    SVU123
    Edited by: svu123 on Mar 25, 2011 5:46 AM
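    The backup-and-delete step at the end of that flow can be sketched in plain Java NIO (a sketch only; the paths and file names are illustrative, not from the actual project):

    ```java
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    // Sketch: after the load completes, copy the inbound file to a backup
    // directory and delete it from the fixed InfoPackage path.
    public class ArchiveUpload {
        static Path archive(Path inbox, Path backupDir, String fileName) throws IOException {
            Path source = inbox.resolve(fileName);
            Files.createDirectories(backupDir);
            // Date-stamp the copy so tomorrow's identically named file can't overwrite it.
            Path backup = backupDir.resolve(fileName + "." + java.time.LocalDate.now());
            Files.copy(source, backup, StandardCopyOption.REPLACE_EXISTING);
            Files.delete(source);
            return backup;
        }
    }
    ```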

  • Best Practice for External Libraries, Shared Libraries and Web Dynpro

    Two blogs have been written on sharing libraries with Web Dynpro DCs, but I would
    like to know the best practice for doing this.
    External libraries seem to work great at compile time, but when deploying there is often an error related to the external library not being a deployed component.
    Is there a workaround for this besides creating a shared J2EE library, which I have been able to get working? I am not just interested in something that works; I really
    want to know the best practice for this. What is the best way to limit the number of jars that need to be kept in a shared/external library? When is sharing a ref service, etc., a valid approach versus hunting down the jars in the portal libraries and storing them in an external library?


  • Best practice for external but secure access to internal data?

    We need external customers/vendors/partners to access some of our company data (view/add/edit). It's not so easy to segment those databases/tables/records out from the other existing ones (and put separate database(s) in the DMZ where our server is). Our
    current solution is to have a 1433 hole from the web server into our database server. The user credentials are not in any sort of web.config but rather compiled into our DLLs, and that SQL login has read/write access to a very limited number of databases.
    Our security group says this is still not secure, but how else are we to do it? Even with a web service, there would still have to be a hole somewhere. Is there any standard best practice for this?
    Thanks.

    Security is mainly about mitigation rather than 100% secure, "We have unknown unknowns". The component needs to talk to SQL Server. You could continue to use http to talk to SQL Server, perhaps even get SOAP Transactions working but personally
    I'd have more worries about using such a 'less trodden' path since that is exactly the areas where more security problems are discovered. I don't know about your specific design issues so there might be even more ways to mitigate the risk but in general you're
    using a DMZ as a decent way to mitigate risk. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com

  • Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

    What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers, but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both
    versions, or is it better to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
    I'm currently leaning towards creating two separate deployment shares, just so that I don't have to keep typing (x86) and (x64) for every application I import, as well as making it easier when choosing applications in the Lite Touch installation. But I know
    each deployment share has the option to create both an x86 and x64 boot image, so that's why I am confused. 

    Supporting two task sequences is way easier than supporting two shares. Two shares means two boot media, or maintaining a method of directing the user to one or the other. Everything needs to be imported or configured twice, not to mention doubling storage
    space. MDT is designed to have multiple task sequences; why wouldn't you use them?
    Supporting multiple task sequences can be a pain, but it's not bad once you get a system. Supporting app installs intelligently is a large part of that. We have one folder per app install, with a wrapper VBScript that handles OS detection. If there are separate
    binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So: import once, assign once, and forget it. It's the same install package we use for Altiris, and we'll be using a PowerShell
    version of it when we fully migrate to SCCM.
    Others handle x86 and x64 apps separately, and use the MDT app details to select which platform the app is meant for. I've done that, but since we have a template for the VBScript wrapper and it's a standard process, I believe our way is easier. YMMV.
    Once you get your apps into MDT, create bundles: a core build bundle, a core deploy bundle, a laptop deploy bundle, etc. Now you don't have to assign twenty apps to both task sequences, just one bundle. When you replace one app in the bundle, all task sequences are
    updated automatically. It's kind of the same mentality as Active Directory: users, groups, and resources = apps, bundles, and task sequences.
    If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share. Use a selection profile to upload only your deploy side to production. In fact, I separate everything (except
    drivers) into build and deploy folders on my lab server. Don't mix build and deploy, and don't mix lab/QA and production. I also keep a "Retired" folder. When I replace an app, TS, OS, etc., I move it to the retired folder and prepend "RETIRED - "
    so I can instantly spot it if it happens to show up somewhere it shouldn't.
    To me, the biggest "weakness" of MDT is its flexibility. There are literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things
    get complicated. Tossing everything into one giant bucket will have you pulling your hair out.
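    The per-app wrapper idea above, sketched in Java rather than VBScript (the folder layout and setup command are illustrative only):

    ```java
    // Sketch: one install command per app; the wrapper picks the x86 or x64
    // binaries subfolder based on the OS architecture it finds at run time.
    public class InstallWrapper {
        static String binaryFolder(String osArch) {
            // amd64 / x86_64 / ia64 all contain "64"; anything else is treated as x86.
            return osArch.contains("64") ? "x64" : "x86";
        }

        public static void main(String[] args) {
            String folder = binaryFolder(System.getProperty("os.arch"));
            // The real wrapper would launch <folder>\setup.exe silently here.
            System.out.println("would run: " + folder + "\\setup.exe /quiet");
        }
    }
    ```

    Because the detection lives in the wrapper, the app is imported and assigned once in MDT, exactly as the reply describes.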

  • Best practice for GSS design

    Please advise as to which records need to go in the public DNS server in a scenario where I have a URL, say x.y.com, which is listed in the Domain List of the GSS-P, so that the GSS-P or GSS-S can hand out the respective external VIP to clients requesting the URL in case one of the GSSes/sites (GSS-P and GSS-S) becomes unavailable.
    Please also specify the communication path of a client accessing x.y.com.
    Please advise on the best practice.
    Thanks in advance
    ~EM

    Hi,
    I am new to GSS. I would appreciate it if someone could help me with the design. I want to know if I need to put the GSS inline after the internet-facing firewall and before the ACE module, or use it in one-arm mode. I'm trying to figure out the best fit for the design. Inline:
    FWSM1 >>> GSS >>> ACE
    or one-arm mode, hanging off the path between the FWSM1 and the ACE:
    FWSM1 >>> ACE
            |
           GSS
    Thanks in advance,
    Nav
