Best practice to verify 'public' code for unauthorized changes

Hi everybody,
I have to make sure that some "critical" classes within my project are only executed at runtime when they are authorized by me. On the other hand, these classes are not secret and the source is available to others.
I thought about signing the jar and verifying the jar during runtime, but this check could already have been altered.
What do you think is the best solution for my problem? The "verifying code" can be obfuscated if needed.
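For reference, a minimal sketch of the runtime verification idea using only the standard java.util.jar API. The pinned-certificate comparison is left as a placeholder, since that part depends on your own signing certificate; treat this as an illustration, not a hardened solution:

import java.io.InputStream;
import java.security.cert.Certificate;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarVerifier {
    // Returns true only if every class entry in the jar carries a signature.
    // Each entry must be read to the end before getCertificates() is populated.
    public static boolean allClassesSigned(String jarPath) throws Exception {
        try (JarFile jar = new JarFile(jarPath, true)) { // true = verify signatures while reading
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (entry.isDirectory() || !entry.getName().endsWith(".class")) {
                    continue;
                }
                try (InputStream in = jar.getInputStream(entry)) {
                    byte[] buf = new byte[8192];
                    while (in.read(buf) != -1) {
                        // reading fully triggers verification; a SecurityException
                        // here means the entry was tampered with after signing
                    }
                }
                Certificate[] certs = entry.getCertificates();
                if (certs == null || certs.length == 0) {
                    return false; // unsigned entry found
                }
                // TODO: compare certs against your own pinned certificate here
            }
        }
        return true;
    }
}

As the question itself notes, an attacker who can alter the jar can just as easily patch this check out, so at best it raises the bar; that is why the verifying code itself is usually obfuscated or duplicated.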

Hi,
Per my understanding, you might want to find a better way to create a Lookup field.
Though there are different ways to create a Lookup field (declaratively or programmatically), there is not much difference between them in performance or maintenance.
If you go the declarative way, the whole definition of the field is hardcoded; users simply need to deploy the solution, and the Lookup field will be there in the specific list.
If you create the field programmatically, there is great flexibility when provisioning a field, as you are able to specify the properties dynamically.
Thus, you might need to make a choice depending on the scenario in your actual production environment.
Thanks
Patrick Liang
TechNet Community Support

Similar Messages

  • What are the best practices to migrate VPN users for Inter forest migration?


    It depends on various factors. There is no "generic" solution or best-practice recommendation. Which migration tool are you planning to use?
    Quest (QMM) has a VPN migration solution/tool.
    With ADMT you can develop your own service-based solution if required. I believe it was mentioned in my blog post.
    Santhosh Sivarajan | Houston, TX | www.sivarajan.com
    ITIL,MCITP,MCTS,MCSE (W2K3/W2K/NT4),MCSA(W2K3/W2K/MSG),Network+,CCNA
    Windows Server 2012 Book - Migrating from 2008 to Windows Server 2012
    This posting is provided AS IS with no warranties, and confers no rights.

  • Best practice standard User Access Test for WIN2012 AD


    Hello,
    As before, add a computer to the domain and log on to the computer with a domain user account.
    From the client machine you should be able to open the shared folders on the DCs, either with:
    \\DCName\sysvol
    \\DCName\netlogon
    or
    \\NetBiosDomainName\sysvol
    \\NetBiosDomainName\netlogon
    Best regards
    Meinolf Weber
    MVP, MCP, MCTS
    Microsoft MVP - Directory Services
    My Blog: http://blogs.msmvps.com/MWeber
    Disclaimer: This posting is provided AS IS with no warranties or guarantees and confers no rights.

  • Quick question regarding best practice and dedicating NICs for traffic separation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic, etc. I get that it's best practice to try and separate traffic where you can, especially for things like FT, however I just wondered if there was a preferred method of achieving this. What I mean is...
    -     Is it OK to have everything on one switch but set each respective portgroup to have a primary and failover NIC, i.e. FT, iSCSI and all the others fail over? (This would sort of give you a backup in situations where you have limited physical NICs.)
    -    Or should I always aim to separate things entirely, with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which stuff I should segregate on its own separate switch? Is there some sort of ranking order of priority/importance? FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since this to me seems like a failover risk.

    I know the answer to this probably depends on how many physical NICs you have at your disposal, however I wondered if there are any golden 100% rules, for example that FT must absolutely be on its own switch with its own NICs, even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch, etc.

  • I can't access my cell to verify a code for iCloud Keychain. What do I do to get Keychain going?


    Hi freddyg5316,
    Sorry to hear you are having issues with this. If you are having trouble receiving the verification code via SMS when setting up iCloud Keychain, you may find the information and steps outlined in the following article helpful, in particular this portion:
    If you didn't receive the verification code via SMS
    Make sure that you have a strong cellular network connection on your phone.
    Make sure that your phone number can get SMS messages. To check, ask someone to send you a text message.
    Make sure that the correct phone number is associated with your account:
    On your iPhone, iPad, or iPod touch, tap Settings > iCloud > Keychain > Advanced. (In iOS 7, tap Settings > iCloud > Account > Keychain.) Make sure the phone number under Verification Number is correct. If not, enter another phone number. 
    On your Mac, choose Apple menu > System Preferences. Click iCloud, then click Options next to Keychain. (In OS X Mavericks or earlier, click iCloud, then click Account Details.) Make sure the phone number under Verification number is correct. If not, enter another phone number.
    If you can't access a device that has iCloud Keychain enabled, contact Apple Support and verify your identity to get help setting up iCloud Keychain.
    Get help using iCloud Keychain - Apple Support
    Regards,
    - Brenden

  • [XI 3.1] BEST PRACTICE method of Oracle connection for RPTs on Linux

    Business Objects XI (3.1) - SP3.
    Running on Red Hat Enterprise Linux OS.
    7,000+ Crystal Reports 2008 *.rpt objects ONLY (No Universe / No WebI).
    All reports connecting to Oracle 10g databases.
    ==================
    In the past, all of this infrastructure was running on Windows Server OS and providing the database access via a Named ODBC connection (eg. "APP_DATA".)
    This made it easy to manage as all the Report Developers had a standard System DSN called "APP_DATA" which was the same as the System DSN name on all of our DEV, TEST/UAT, and PROD servers for Business Objects.
    When we wanted to move/promote a *.rpt file from DEV to PROD we did not have to change any "Database Connection" info, as it was all taken care of by pointing the System DSN called "APP_DATA" to a different physical Oracle server at the ODBC level.
    Now, that hardware is moving from Windows OS to Red Hat Linux and we are trying to determine the Best Practices (and Pros/Cons) of using one of the three methods below to access the Oracle database for our *.rpts....
    1.) Oracle Native connection
    2.) ODBC connection
    3.) JDBC connection
    Here's what we have determined so far -
    1a.) Oracle Native connection should be the most efficient method of passing SQL-query to the DB with the fewest issues and best speed [PRO]
    1b.) Oracle Native connection may not be supported on Linux - http://www.forumtopics.com/busobj/viewtopic.php?t=118770&view=previous&sid=9cca754b468fc67888ab2553c0fbe448 [CON]
    1c.) Using Oracle Native would require special-handling on the *.rpts at either the source-file or the CMC level to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    2a.) A 3rd-Party Linux ODBC option may be available from EasySoft - http://www.easysoft.com/products/data_access/odbc_oracle_driver/index.html - which would allow us to use a similar Developer / Admin overhead to what we are used to. [PRO]
    2b.) Adding a 3rd-party vendor into the mix may lead to support issues if we have problems with the results or speeds of our queries. [CON]
    3a.) JDBC appears to be the "de facto standard" when running Oracle SQL queries from Linux. [PRO]
    3b.) There may be issues with results or speeds of our queries when using JDBC. [CON]
    3c.) Using JDBC requires the explicit-IP of the Oracle server to be defined for each connection. This would require special-handling on the *.rpts at either the source-file (and NOT the CMC level) to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    ==================
    We would appreciate some advice from anyone who has been down this road before.
    What were your Best Practices?
    What can you add to the Pros and Cons listed above?
    How do we find the "sweet spot" between quality/performance/speed of reports and easy-overhead for the Admins and Developers?
    As always, thanks in advance for your comments.

    Hi,
    I just saw this article and I would like to add some info.
    First, you can quite easily reproduce the same way of working as with the ODBC entries by playing with Oracle name resolution on the server. By changing some files (sqlnet.ora, tnsnames.ora, ...) you can define a different Oracle server for a specific name that is the same across all environments.
    The database name will then be resolved differently depending on the environment, and will therefore access a different database.
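    To illustrate the idea, a hypothetical tnsnames.ora entry (host and service names below are made up) where each environment maps the same alias to its own server, so the reports never need to change:

    APP_DATA =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dev-oracle.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = appdata_dev))
      )

    On the TEST and PROD servers the same APP_DATA alias simply points at a different HOST, mirroring what the shared System DSN name did on Windows.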
    The second option is the possibility to change the connection in .rpt files in an automated way, like the Schedule Manager. This tool is an additional web application to deploy that can change the connection settings of thousands of rpt reports in a few clicks. You can find it here:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80af7965-8bdf-2b10-fa94-bb21833f3db8
    The last option is to do it with a small SDK script (see the sketch below); for this purpose, a few lines of code can change all the reports in a row.
    After some implementations on Linux against Oracle databases, I would also prefer the native connection. ODBC and JDBC are deprecated ways to connect to the database. You can use DataDirect connectors, which are quite good, but with large volumes you will see the difference.
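    As a rough illustration of the "small SDK script" option, the skeleton below uses the BusinessObjects Enterprise Java SDK to enumerate the Crystal Reports in the CMS. The logon details are placeholders, and the actual connection rewrite is left as a comment since the exact calls depend on your driver; verify everything against the XI 3.1 javadocs before relying on it:

    import com.crystaldecisions.sdk.framework.CrystalEnterprise;
    import com.crystaldecisions.sdk.framework.IEnterpriseSession;
    import com.crystaldecisions.sdk.occa.infostore.IInfoObjects;
    import com.crystaldecisions.sdk.occa.infostore.IInfoStore;

    public class RelinkReports {
        public static void main(String[] args) throws Exception {
            // Placeholder credentials and CMS host
            IEnterpriseSession session = CrystalEnterprise.getSessionMgr()
                    .logon("Administrator", "password", "cms-host:6400", "secEnterprise");
            IInfoStore infoStore = (IInfoStore) session.getService("InfoStore");
            // Query the CMS repository for all Crystal Reports
            IInfoObjects reports = infoStore.query(
                    "SELECT SI_ID, SI_NAME FROM CI_INFOOBJECTS WHERE SI_KIND='CrystalReport'");
            System.out.println("Found " + reports.size() + " reports to relink.");
            // For each report: open it via the RAS ReportClientDocument, update the
            // connection through its DatabaseController, and save it back -- those
            // calls are driver-specific and omitted here.
            session.logoff();
        }
    }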

  • Best practice DNS in VPN environment for Lync2013 clients

    So I do have those site-to-site VPNs to connect the small branch offices to the main office. Internal DNS makes sure that the branch offices can access all the servers/services in the main office with their domain.local namespace.
    In such a scenario will the Lync2013 clients connect through the VPN to the internal sites due to both lyncdiscover and lyncdiscoverinternal being available?
    Wouldn't it cause way less burden on the VPN routers if clients would simply go out to the internet and connect from the external side, so all the Lync traffic does not have to be stuffed through the VPN pipe? I don't see the point in encrypting the traffic once more.
    Thanks for your suggestions about best practices!
    HST

      Hi,
    When users connect to the corporate network using a VPN client, Lync media traffic is sent through the VPN tunnel. This configuration can create additional latency and jitter because media traffic must pass through an additional layer of encryption and
    decryption. The issue is compounded when the VPN concentrator is busy.
    If you want to connect Lync server from public network you need to deploy an Edge server.
    There is a solution that forces the media traffic of VPN-connected external Lync clients through the Edge Servers; you can refer to the "Solution Configuration" part of the link below:
     http://blogs.technet.com/b/nexthop/archive/2011/11/15/enabling-lync-media-to-bypass-a-vpn-tunnel.aspx
    Best Regards,
    Eason Huang
    TechNet Community Support

  • Best practice: Using break statement inside for loop

    Hi All,
    Is using a break statement inside a FOR loop a best practice or not?
    I have given some sample code:
    1. With a break statement
    2. With a boolean variable that decides whether to come out of the loop or not.

    // 1. With a break statement
    for (int i = 0; i < 10; i++) {
        if (i == 5) {
            break;
        }
    }

    // 2. With a boolean flag
    boolean breakForLoop = false;
    for (int i = 0; i < 10 && !breakForLoop; i++) {
        if (i == 5) {
            breakForLoop = true;
        }
    }

    The example may be a stupid one, but I want to know which one is good.
    Thanks and Regards,
    Ashok kumar B.

    Actually, it's bad practice to use break anywhere other than in conjunction with a switch statement. Presumably, if you favour:

    boolean test = true;
    while (test) {
        test = foo && bar;
        if (test) {
            // ...
        }
    }

    over

    for (;;) {
        if (!(foo && bar)) break;
        // ...
    }

    then you also favour

    boolean test = foo && bar;
    if (test) {
        // ...
    }

    over

    if (foo && bar) {
        // ...
    }

    Or can you justify your statement with any example which doesn't cause more complexity, more variables in scope, and multiple assignments and tests?
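    As an aside, for nested loops Java also offers a labeled break, which exits several loops at once without any flag variable. A minimal illustration (the loop bounds and condition are made up):

    outer:
    for (int i = 0; i < 10; i++) {
        for (int j = 0; j < 10; j++) {
            if (i * j == 42) {
                break outer; // exits both loops at once
            }
        }
    }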

  • Best Practice Question - Activate Company Code - Open client or transport?

    Hello.
    When activating Company Codes in a newly productive system, is it best practice to do it directly in an open client, or to change the setting via transport?
    Thanks and Regards,
    D Flores

    What do you mean by activating a company code?
    Is it the "productive" checkbox in OBY6 you are talking about?
    If so, open the client for manual changes, make the setting, and then put the client settings back.

  • Best Practice: A J2EE Blue-Print for a Typical Web App

    Consider a typical synchronous Struts-based web application which does a simple DB search and post. What are some of the main patterns and components that should be used if following the "industry best practices"?
    Does the following flow seem accurate?
    A Struts Action creates a TransferObject and passes it to a Business Delegate. The Delegate finds the appropriate BusinessObject, the BusinessObject uses the Data Access Object, the CRUD operation happens, and the result is sent back to the Action in the same TransferObject.
    Which of these components need an interface?
    What's the best way for this components to interact with each other (factory, etc.)?

    There are 3 tiers in a Java EE application. (Presentation, Business, Integration).
    The BusinessDelegate in this scenario would be a Presentation-tier business delegate. This guy would interact with a Session Facade who lives on the Business-tier. The SessionFacade is the abstraction on the Business-tier and the Business Delegate is the abstraction on the Presentation-tier. It is these guys that have direct communication. This design enables low coupling between the actual implementations of each area. If done properly, you could go from EJB to Web Service to POJO business models without ever having to change anything in the Presentation-tier.
    These object-oriented design patterns are primarily for Enterprise applications with extensive Quality-of-Service requirements.
    In your scenario, the Presentation-tier would contain a MVC-based web application, i.e. Struts. The business model and business/domain requirements would be implemented in the Business-tier.
    Presentation Tier - Struts Web Application
    Business Tier - (EJB | POJO | WEB SERVICES) Application
    Integration Tier - (Relational Database | File System | XML Database | EIS)
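    To make the tier boundaries concrete, here is a minimal sketch of the pattern described above; all names (OrderTO, OrderFacade, OrderDelegate) are illustrative, not from any particular framework:

    // Transfer Object passed between tiers
    public class OrderTO implements java.io.Serializable {
        public long id;
        public String status;
    }

    // Business-tier abstraction: the Session Facade contract
    public interface OrderFacade {
        OrderTO findOrder(long id);
    }

    // Presentation-tier Business Delegate: hides lookup and transport details
    public class OrderDelegate {
        private final OrderFacade facade;

        public OrderDelegate(OrderFacade facade) {
            // In practice the facade might come from JNDI, a ServiceLocator, or a factory
            this.facade = facade;
        }

        public OrderTO findOrder(long id) {
            return facade.findOrder(id); // the Struts Action only ever talks to the delegate
        }
    }

    A Struts Action would call OrderDelegate and never see whether the facade is backed by an EJB, a web service, or a plain POJO, which is exactly the low coupling described above.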

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult and we need some help. We want to be able to compare side-by-side actual measures with their corresponding objectives/targets. Sounds simple. But, our objectives are static (not able to be aggregated) with multi-dimensionality and multi-levels. We need some best practice tips on how to design our data model and repository properly so that we can see the objective/target for a measure regardless of the dimensions that are used in the criteria and regardless of the level.
    Here are some more details:
    Example of existing objective table.
    Dimension1   Dimension2   Dimension3   Obj1   Obj2   Quarter
    NULL         NULL         NULL         .99    1.8    1Q13
    DIM1VAL1     NULL         NULL         .99    2.4    1Q13
    DIM1VAL1     DIM2VAL1     NULL         .98    2.41   1Q13
    DIM1VAL1     DIM2VAL1     DIM3VAL1     .97    2.3    1Q13
    DIM1VAL1     NULL         DIM3VAL1     .96    1.9    1Q13
    NULL         DIM2VAL1     NULL         .97    2.2    1Q13
    NULL         DIM2VAL1     DIM3VAL1     .95    2.0    1Q13
    NULL         NULL         DIM3VAL1     .94    3.1    1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure they could get back a different objective. They could add Dimension1 and get 99%. They could add Dimension1 and Dimension2 and get 98%. They could add all three dimensions and get 97%. They could add zero dimensions (highest grain) and get 99%. Using our existing structure if we were to add a new dimension to the mix the possible combinations would grow dramatically. (Not flexible)
    - We would like our final solution to be flexible enough so that we could view objectives with altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
    Could anyone share a similar situation where you have implemented a data model structure with the proper repository joins to handle showing side-by-side objectives/targets where the objectives were static and could be displayed at differing levels with flexible dimensions as described?
    Any help would be greatly appreciated.

    Hi, yes, this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS topic, which will in turn insert the data into a DB. Or, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. Is there any chance of altering the DB, I mean by changing config files, so that the data doesn't go to that Reports schema and instead goes to a custom schema created by a user? I don't know if it can be done. My problem is that when I configure the JMS topic for sensor actions, I see blank data coming; for some reason or other the data is not getting posted. I have used an ESB, with a routing service based on the schema which I am monitoring. Can anyone help?

  • Best practice to use Time Capsule for backup of 3 different products (MBP 15 OS X Lion, MBP 13 OS X Lion and MBA 13 OS X Mountain Lion)? Only the MBP 15 is backed up regularly.

    When I want to save data from the MBA 13 on Mountain Lion (wireless) with Time Capsule, is there any best practice to follow?
    After that, assuming the data is backed up, can we easily differentiate the data in the Time Capsule belonging to the MBP 15/13 and the MBA 13?

    Unfortunately, Apple left off the Ethernet port....the most important port in networking....on the MBA, so your first backup of the entire Mac will need to be done using wireless.
    That may take a day or two unless your MBA has a Thunderbolt port on it in which case you could add a Thunderbolt to Ethernet adapter and connect the MBA to the Time Capsule for the first backup using an Ethernet cable.  It will probably only take 3-4 hours or less doing it this way.
    Once you have the first complete backup done, subsequent backups can be done using wireless, since they will only take a few minutes on average.
    Both Macs will backup to the Time Capsule using Time Machine automatically. Backups will be kept completely separate, so one Mac will normally only be able to "see" its own backups.

  • Best Practices in use of ABAP for SRM and/or CRM Configuration

    I was wondering if there is a document that defines best practices for the use of ABAP with the installation and customization of SRM and/or CRM, such as the amount of ABAP coding typically required, and best practices around the use of ABAP for customization and configuration.
    Thanks.

    Hi, Johnson
    Sorry, please don't mind, but you are not at the right place to ask a question like this.
    Please read "The Forum Rules of Engagement" before posting!  HOT NEWS!!
    Thanks and Regards,
    Faisal

  • Best Practices? Rendering to Flash for streaming web....

    I am always impressed with the flash based videos I see streaming on YouTube, FastCompany.Tv and other sites....
    My question... can you please either explain or point me in the right direction for streaming video best practices? Specifically, I am looking for info on best settings to produce the flash video (codecs and/or FCP render settings) and then what do people use as a flash player on their websites to show the end result.
    My goal is to create internal instructional videos for corporate training and then host them on my site (or streaming from Akamai). I would like people to be able to watch it in a flash player embedded on my site (and have it look good even if they click on a full screen button) or download to their iPod.
    Examples of what I like, but I don't know how to do:
    http://www.fastcompany.tv/video/getting-government-work
    Thank you in advance for your expertise and insight.
    -Steven

    I would like people to be able to watch it in a flash player embedded on my site (and have it look good even if they click on a full screen button) or download to their iPod.
    Use the H.264 setting for iPod in Compressor. The h.264 file will play in a JW Flash Player and it's able to be downloaded for iPod viewing.

  • Best practice in Infoprovider & Query design for access by BO Universe

    Hello Experts,
    Are there any best practices, identified by practitioners or suggested by SAP, for the development of InfoProviders and queries for access by a BO Universe?
    Best practices should be from the perspective of performance, design simplicity, adaptability to change, etc.
    Appreciate your help.
    Regards,
    Pritesh.

    Thanks Suresh.
    My project plan is to build InfoCubes and queries which will then be used to build a Universe upon. Thus I am looking for dos and don'ts while designing InfoCubes and queries, such that there won't be any issues (performance or other) when they are accessed by the Universe built on them.
    Hope I have made it more clear now.
    Regards,
    Pritesh.
