Best practice question: static constant keys

Hi - quick one here. I need a constant defined that I can use as a key into various NSDictionary instances, and for other purposes as well.
I tried defining this in my header file:
static NSString const *KEY_ID = @"ID";
But passing KEY_ID as the key to a dictionary's setValue:forKey: method causes a warning about pointer types.
Basically, I need the equivalent of Java's public static final String KEY_ID = "ID". Any thoughts?

If you look at your headers, Apple's approach is generally:
extern NSString * const KEY_ID;
with a matching definition in a relevant .m file (NSString * const KEY_ID = @"ID";). Note where the const sits: NSString * const is a constant pointer to NSString, which is what the dictionary APIs expect, whereas your NSString const * declares a pointer to a const NSString, hence the pointer-type warning.

Similar Messages

  • Best Practices Question: How to send error message to SSHR web page.

    Best Practices Question: How to send an error message to an SSHR web page from a custom PL/SQL procedure called by an SSHR workflow.
    For the Manager Self-Service application we've copied various workflows, which were modified to meet business needs. Part of this exercise was creating custom PL/SQL package procedures that gather details for the WF and use them in custom notifications sent by the WF.
    What I'm looking for is: if/when the PL/SQL procedure errors, how does one send a failure message back and display it on the SS page?
    Writing information into a log or table at the database level works for trouble-shooting, but we’re looking for something that will provide the end-user with an intelligent message that the workflow has failed.
    Thanks ahead of time for your responses.
    Rich

    We implemented the same kind of requirement a while back.
    We have defined our PL/SQL procedures with two OUT parameters
    1) Result Type (S:Success, E:Error)
    2) Result Message
    In the PL/SQL procedure we always use the construct below when we want to raise a message:
    hr_utility.set_message(APPL_NO, 'FND_MESSAGE_NAME');
    hr_utility.raise_error;
    In the exception block we write the following (in the successful case we just set p_result_flag := 'S';):
    EXCEPTION
      WHEN APP_EXCEPTION.APPLICATION_EXCEPTION THEN
        p_result_flag := 'E';
        p_result_message := hr_utility.get_message;
      WHEN OTHERS THEN
        p_result_flag := 'E';
        p_result_message := hr_utility.get_message;
        fnd_message.set_name('PER', 'FFU10_GENERAL_ORACLE_ERROR');
        fnd_message.set_token('2', substr(sqlerrm, 1, 200));
        fnd_msg_pub.add;
        p_result_message := fnd_msg_pub.get_detail;
    After executing the PL/SQL, in Java we have written something similar to:
    orclStmt.execute();
    OAExceptionUtils.checkErrors(txn);
    String resultFlag = orclStmt.getString(resultFlagBindIndex);   // supply the bind position of p_result_flag
    if ("E".equalsIgnoreCase(resultFlag)) {
        String resultMessage = orclStmt.getString(resultMessageBindIndex);   // supply the bind position of p_result_message
        orclStmt.close();
        throw new OAException(resultMessage, OAException.ERROR);
    }
    It safely shows the message to the user with all the data in the page.
    We have been using this construct for a long time for all our projects. They are all working as expected.
    Regards,
    Peddi.

  • Best practice question -- copy container, assemble it, build execution plan

    So, this is a design / best practice question:
    I usually copy containers as instructed by docs
    I then set the source system parameters
    I then generate needed parameters / assemble the copied container for ALL subject areas present in the container
    I then build an execution plan for JUST THE 4 SUBJECT AREAS I need, and set whatever is needed before running it.
    QUESTION - When I copy the container, should I delete all the unneeded subject areas out of it, or is it best to do this when building the execution plan? I am basically trying to simplify the container for my own sake, and have the container hold just a few subject areas rather than waiting until I build the execution plan to focus on a few subject areas.
    Your thoughts / clarifications are appreciated.
    Regards,

    Hi,
    I would suggest that you leave the subject areas in and just don't include them in the execution plan. Otherwise you may run into a situation where you need to include another subject area in the future, and you will have to go through the hassle of recreating it in your SSC.
    Regards,
    Matt

  • SAP Adapter Best Practice Question for Deployment to Clustered Environment

    I have a best practices question on the iWay adapters around deployment into a clustered environment.
    According to the documentation, you are supposed to run the installer on both nodes in the cluster but configure on just the first node. See below:
    Install Oracle Application Adapters 11g Release 1 (11.1.1.3.0) on both machines.
    Configure a J2CA configuration as a database repository on the first machine.
    Perform the required changes to the ra.xml and weblogic-ra.xml files before deployment.
    This makes sense to me because once you deploy the adapter RAR in the next step, it will get staged and deployed on both nodes in the cluster.
    What is the best practice for the 3rdParty adapter directory on the second node? The installer lays it down with the adapter RAR and all. Since we only configure the adapter on node 1, the directory on node 2 will remain with the default installation files/values, not the configured ones. Is it best practice to copy node 1's 3rdParty directory to node 2 once configured? If we leave node 2 with the default files/values, I suspect this will lead to confusion for someone troubleshooting later on, because it will appear it was never configured correctly.
    What do folks typically do in this situation? Obviously everything works if we leave it as is, but it seems strange to have the two nodes differ.

    What is the version of the operating system? If you are on any OS version lower than Windows 2012, then you need to add one more voter for quorum.
    Balmukund Lakhani

  • SAP Adapter Best Practice Question for Migration of Channels

    I have a best practice question on the SAP adapter when migrating an OSB project from one environment (DEV) to another (QA).
    If my project includes an adapter channel (e.g., an inbound SAP proxy listening on a channel), how do I migrate that project to another environment if the channel in the target environment is different?
    I tried using the search and replace mechanism in the sbconsole, but it doesn't find the channel name in the jca and wsdl files.
    What is the recommended way to migrate from one environment to the other when the channel name changes?


  • Constants or properties file - best practice question

    Hi,
    My application has a number of values that will be used in different classes throughout my project. For example, I perform a check against the max allowed length of an ID in numerous places in my code.
    Therefore, it makes sense to set this value in a central location, and refer to it as a variable where it is required in my code.
    I see other projects using a public static final member in a Constants class to set these types of values. Is this recommended or best practice?
    The only alternative I can think of would be to use a properties file, and inject the value using Spring etc.
    What is considered best practice or the neatest way for doing this?
    Thanks

    user10340197 wrote:
    Thanks. I'm using Spring anyway, so it would provide me with the PropertyPlaceholderConfigurer for injecting these.
    And the name of that class provides a clue. As Kayaman said, constants are constants. Math.PI does not and will not change, EVER. Neither will Integer.MAX_VALUE.
    Configuration parameters, on the other hand, might change. If your MAX_ID_LENGTH is ever likely to change, and could do so without causing widespread chaos, then it probably should be a property (ie, a configuration) value.
    If not, it should probably be a constant (with appropriate 60-point documentation warning people what might happen if they DO change it).
    Winston
    PS: There is nothing particularly terrible about having a Properties class (except that you'll want to call it something different) that initializes its values from configuration files; although if there are gazillions of them, you might want to:
    (a) Split them up into "themes".
    (b) Re-think your design.
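    For what it's worth, here is a minimal sketch of the two options being weighed above; all class, file, and property names are hypothetical:
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    // Option 1: a true constant -- the value never changes.
    final class Constants {
        private Constants() {}  // no instances
        public static final int MAX_ID_LENGTH = 32;
    }

    // Option 2: a configuration value, loaded from a properties file,
    // so it can differ per environment without a recompile.
    final class AppConfig {
        private final Properties props = new Properties();

        AppConfig(String path) throws IOException {
            try (FileInputStream in = new FileInputStream(path)) {
                props.load(in);
            }
        }

        int maxIdLength() {
            return Integer.parseInt(props.getProperty("max.id.length", "32"));
        }
    }
    With Spring in the picture, the same value could instead be injected via the PropertyPlaceholderConfigurer mentioned above.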

  • Design Pattern / Best Practice Question

    Hi,
    I have been using Flex for a while now, but there is a
    scenario which I still have not found a solution I'm entirely happy
    with. I'm wondering if anyone else out there might have suggestions
    on a design pattern or best practice.
    Suppose I have a view which depends on model data which
    resides in some back end systems. That model data may or may not
    have been loaded (e.g. via a web service or remote object call) at
    the time the view is displayed.
    I don't know if the user will ever visit this part of the
    application so I would prefer to defer retrieval of the data until
    the user actually navigates to this view. Or I want to retrieve the
    data each time the view is displayed because the data is dynamic
    and could change between one presentation of the view and the next.
    Because the data comes from several systems, I cannot simply
    make one service call and display the view when it completes and
    all the data is available. I need to call several services which
    could complete in any order but I only want to display my view
    after I know all of them have completed and all of the model data
    is available. Otherwise, I could end up presenting the user with an incomplete view
    (e.g. some combo boxes are empty until the corresponding service
    call to get the data completes).
    The solution I like best so far is to dispatch a single event
    (I am using Cairngorm) handled by a single command which acts as
    the caller and responder for all of the services. This command then
    remembers which responses it has received and dispatches another
    event to navigate to the view once all the results have returned.
    If the services being called are used in different
    combinations on different screens, this results in proliferation of
    events and commands. An event and command for each service and
    additional events and commands to bundle the services and the
    handling of their responses in the right combinations for each of
    the views.
    Another approach is to have some helper class listen for all
    of the model changes and only display the view when the model
    enters some state that is acceptable. It is sometimes difficult to
    determine just by looking at the model whether it is in the right
    state (e.g. how can I tell that a collection is the new collection
    that should just have been requested versus an old one lingering
    from a previous call). The logic required can get kind of
    convoluted and brittle.
    Basically, all of the solutions I've come up with so far seem
    less than ideal and a little hackish. I keep thinking there is some
    elegant solution out there that I am just missing ... but so far,
    no luck finding it. Thoughts?
    Thanks.
    Bill

    i think a service class is right - to coordinate your calls.
    i would have 1 event per call (so you could listen to individual
    responses if you wanted to).
    then i would use a flag. if you want to check for staleness,
    you would probably want two objects to map your service flag to
    lastRequested and lastCompleted. when you check, check if it's
    completed, and if it's not stale and that your lastRequested is
    less than lastCompleted (meaning that you're not currently waiting,
    i.e. you've returned since making a request). then make the request
    and update your lastRequested.
    here's a snippet of what i mean.
    ./paul
    public static const SVC1_LOADED:int = 1;
    public static const SVC2_LOADED:int = 2;
    public static const SVC3_LOADED:int = 4;
    public static const SVCALL_LOADED:int = 7;
    private var completedFlag:int = 0;
    private var lastCompleted:Object = {};   // maps service flag -> time its last response arrived
    then each call would have its own callback:
    private function onSvc1Complete( evt:Event ):void {
        completedFlag |= SVC1_LOADED;
        lastCompleted[ SVC1_LOADED ] = getTimer();
        dispatchEvent( new Event("svc1complete") );
        checkDone();
    }
    private function checkDone():void {
        if ( completedFlag == SVCALL_LOADED )
            dispatchEvent( new Event("allLoaded") );
    }
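    For comparison, here is the same wait-for-all idea sketched outside Flex, in Java with CompletableFuture, just to make the coordination pattern concrete; the service calls are hypothetical stand-ins:
    import java.util.concurrent.CompletableFuture;

    public class ViewLoader {
        // Each service call returns a future that completes when its response arrives.
        private CompletableFuture<Void> callSvc1() { return CompletableFuture.runAsync(() -> { /* fetch */ }); }
        private CompletableFuture<Void> callSvc2() { return CompletableFuture.runAsync(() -> { /* fetch */ }); }
        private CompletableFuture<Void> callSvc3() { return CompletableFuture.runAsync(() -> { /* fetch */ }); }

        public void loadThenShow(Runnable showView) {
            // allOf completes only after every call has completed,
            // regardless of the order the responses arrive in.
            CompletableFuture.allOf(callSvc1(), callSvc2(), callSvc3())
                             .thenRun(showView);
        }
    }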

  • Best Practice Question

    I have 3 areas for my DWH.
    The first area is staging, then validation and core.
    Staging is just to load data from the source systems;
    validation is to validate data (every city has to have a country, ...);
    core is my DWH schema.
    The first step in ETL is to load the data from core to validation; let's say my GEO_DIM dimension goes to Countries, Cities and Regions in core. Additionally, I build a CRC sum when I download from core to validation and store the CRC checksum in a staging table.
    The second step is to load data from the source systems to staging, but only rows that don't match the previously downloaded CRC checksum, so only changed or new data goes to staging.
    The third step is to load that new/changed data from staging to core and check some dependencies; that's the validation.
    My question is: what is the best practice to bring three tables (Countries, Cities and Regions) into one dimension?
    thanks and regards
    Andreas

    Andreas,
    I guess the correct answer is: it depends... Kidding aside, are you planning to use a flat star table for this dimension? If that is the case, you would join the sources together and load the result into the table.
    Now this sounds way too simple, so I guess there is something more to the question...
    Jean-Pierre

  • Best Practice to use one Key on ACE for new CSR?

    We generate multiple CSRs on our ACE, but our previous network admin was only using one key for all new CSR requests.
    I.e., we have a samplekey.pem key on our ACE, and we use samplekey.pem to generate CSRs for multiple certs.
    Is this best practice, or should we be using a new key for each new CSR?
    Also, is it OK to delete old CSRs on the LB, since the limit is only 8? Thanks.


  • Informatica and Essbase Best Practice questions

    We now have the Informatica adapter for Essbase installed and working. We have been able to get Informatica to upload data successfully. Now I have a few questions that I have not been able to find answers to in any documentation or forums for Informatica or Essbase. I have submitted these same questions to the Informatica Support but thought I would also post the questions here to see if many folks are using Informatica against Essbase.
    We are using:
    Informatica 8.6.1 (Linux)
    Essbase 11.1.1.3 (Windows 2003)
    1) I can see in Informatica that when we load data to Essbase (target) it gives me the option to run a calc script AFTER it loads the data. However, if I need to run a calc script BEFORE the load to Essbase (target), what is the best practice? The workaround I have found is to add the same session twice and, for the 1st instance, select the option to 'ONLY RUN THE CALC SCRIPT' on the mapping tab. The problem with this is the log shows that it will still run the query against the source tables. This will impact run times and double the querying against the source database. What is the best practice and proper way to build the workflow to run a calc script BEFORE the load?
    2) Since you do not see the list of calc scripts for Essbase in Informatica (you have to manually type the calc name), if I want to run the 'Default' calc for Essbase, what is the syntax to run the 'Default' calc script? I tried 'Default' but it didn't seem to work.
    3) I have other tasks in Essbase I want to do before actually having Informatica load the data. I would like to run the MAXL commands via a Command task. What is the best practice for doing this, and what is the syntax to run MAXL commands in a Command task in Informatica? I previously had shell scripts built on the Informatica server that would be kicked off from within Informatica, but we are trying to move away from shell scripts and instead have the scripting code IN the workflows/sessions, to make it easier to review the code and follow the logic rather than having to find the scripts and open each of them.
    Any assistance with getting the two products working together would be GREATLY appreciated!
    Robert

    As I know, addUser(User user){ ... } is much more useful, for several reasons:
    1. It's object oriented.
    2. It's easy to write, because if an object has many parameters it's very painful to write a method with comma-separated parameters.
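    To make that concrete, here is a hypothetical Java sketch of the two signatures being compared:
    class User {
        private final String name;
        private final String email;

        User(String name, String email) {
            this.name = name;
            this.email = email;
        }

        String getName()  { return name; }
        String getEmail() { return email; }
    }

    class UserService {
        // Preferred: one cohesive argument; adding a field to User later
        // does not break every caller of addUser.
        void addUser(User user) {
            // persist the user ...
        }

        // The painful alternative grows with every new field:
        // void addUser(String name, String email, String phone, String address, ...)
    }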

  • What is the best practice for creating primary key on fact table?

    What is the best practice for the primary key on a fact table?
    1. Using composite key
    2. Create a surrogate key
    3. No primary key
    In the documentation, I can only find: "From a modeling standpoint, the primary key of the fact table is usually a composite key that is made up of all of its foreign keys."
    http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/logical.htm#i1006423
    I also found a relevant thread that states that a primary key on a fact table is necessary:
    Primary Key on Fact Table.
    But if the business does not require uniqueness of the records and there is no materialized view, do we still need a primary key? Is there any other bad effect of having no primary key on a fact table? And are there any benefits from not creating a primary key?

    Well, the natural combination of dimensions connected to the fact would be a natural primary key, and it would be composite.
    Having an artificial PK might simplify things a bit.
    Having no PK leads to a major mess. A fact should represent a business transaction, or some general event. If you're loading data, you want to be able to identify the records that are processed. Also, without a PK, if you forget to create a unique key, access to this fact table will be slow. Plus, having no PK means that if you want to use different tools, like the Data Modeller in JBuilder or OWB insert/update functionality, they won't function, since there's no PK. Defining a PK for every table is good practice. Not defining a PK is asking for a load of problems, from performance to functionality and data quality.

  • Best Practice Question: Portal Sync Rules

    Hi,
    Is there any benefit in combining Inbound and Outbound Flows in a single Sync Rule?
    Or does it not matter if Inbound flows have their own Sync Rule and Outbound flows have their own Sync Rules?
    Does either option generate more/less EREs/DREs?
    Is either option better for performance reasons?
    look forward to your comments, thank you
    sk

    1:
    Only the OSRs create EREs, so if you have 4 outbound and 3 inbound rules (assuming a user gets all 4 outbound SRs), the user should have 4 EREs. If the rules were combined into 3 inbound/outbound rules and 1 outbound rule, the user still gets 4 EREs.
    2:
    If my source was AD, I would create a new MV attribute called inAD. Then import a constant "true" from the target, AD for example. Then from my source, HR for example, I'll flow in a constant "false". Lastly, update the MV attribute precedence for the inAD attribute to be AD then HR. From there I'd create a custom attribute in the portal so I can use the flag for set criteria.
    This might be a long-winded way of getting it done, but say you have 10k users in AD and used DREs: it means 10k more objects in the MV and in the portal.
    Disclaimer: Not saying this is best practice, it's just what I do :)

  • Best practice question for Bounded task flows

    We are new to JDeveloper/ADF and I was wondering if we should always try to use a bounded task flow for our applications... is this considered the best way to develop an app, even the small single-page ones?
    Thanks in advance.

    Hi,
    let me turn the question around: how many tools do you have at home besides a hammer? In other words, it's the problem you need to solve (and use cases are problems) that should determine the use of bounded task flows and their granularity. Note that each use case makes a good candidate for delivery as a bounded task flow; that doesn't mean every single page needs to go into its own task flow. For general bounded task flow best practices, have a look here:
    http://www.oracle.com/technetwork/developer-tools/jdev/adf-task-flow-design-132904.pdf
    Frank

  • Bean best practice question

    Simple questions (hopefully), I know how to code this but just want some advice on the best way to do the following:
    1. User enters data into HTML form and submits
    2. Some Java at the backend grabs these details and emails them off somewhere
    I am thinking of doing the following, but what’s the best way?
    1. HTML form submits and data is sent directly to a JavaBean (FormBean.java)
    2. FormBean.java contains standard getters/setters but also contains a method called sendMail(). Is this bad practice? Do I need a second bean, sendMail.java? Or is this completely the wrong way to do things, i.e. should I do this entirely in a servlet, with only 1 bean to grab the data (FormBean) and then access it from the servlet?
    Just a bit confused about what's best practice for this stuff.
    Thanks!

    2. FormBean.java contains standard getters/setters but also contains a method called sendMail(), is this bad practice? Do I need a second Bean sendMail.java?
    A better approach is this:
    a) Have all the form data in the form bean.
    b) Write sendMail in an altogether different class, as an action.
    c) Send the form bean as a parameter to sendMail for processing and sending the email.
    This way your sendMail() becomes a kind of service. Tomorrow you might have some other data which you have to send in an email; in that case, you just reuse the sendMail() method. Otherwise, if you have sendMail() in the form bean itself and there are many form beans, you would have to write sendMail() in every form bean, which is bad practice. One principle of OOAD is to separate out functionality that is redundant across your classes and make it a separate module. If there are changes to the sendMail() functionality then, by having it in one module, you only have to change it in one place.
    Or is this completely the wrong way to do things i.e. should I do this entirely in a servlet with only 1 bean to grab the data (FormBean) and then access from servlet?
    You can have a servlet which acts as a controller: it receives the request parameters, constructs the form bean, and invokes the appropriate action (in your case sendMail()). This is the same as an MVC framework. Instead of re-inventing the wheel to create a servlet controller, form bean, action, etc., you could use one of the several MVC frameworks available, such as Struts or Spring MVC.
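    A minimal sketch of that separation (names are hypothetical, mail-sending details elided):
    class FormBean {
        private String email;
        private String message;

        String getEmail() { return email; }
        void setEmail(String email) { this.email = email; }
        String getMessage() { return message; }
        void setMessage(String message) { this.message = message; }
    }

    // sendMail() lives in its own class, so any caller with a FormBean can reuse it.
    class MailService {
        void sendMail(FormBean form) {
            // assemble and send the email from form.getEmail(), form.getMessage(), ...
        }
    }

    // In the controller servlet:
    //   FormBean form = new FormBean();
    //   form.setEmail(request.getParameter("email"));
    //   form.setMessage(request.getParameter("message"));
    //   new MailService().sendMail(form);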

  • Group by best practice question

    Consider this example:
    TABLE: SALES_DATA
    firm_id|sales_amt|d_date|d_data
    415|45|20090615|Lincoln Financial
    415|30|20090531|Lincoln AG
    416|10|20081005|AM General
    416|20|20080115|AM General Inc.
    I want the output to be grouped by firm_id with the sum of sales_amt and the d_data
    that corresponds to the latest d_date (i.e. max(d_date))
    Proposed query:
    select firm_id,
           sum(sales_amt) total_sales,
           substr(max(d_data), instr(max(d_data), '~') + 1) firm_name
    from (
      select firm_id, sales_amt, d_date || '~' || d_data d_data
      from sales_data
    )
    group by firm_id;
    output is as expected:
    firm_id|total_sales|firm_name
    415|75|Lincoln Financial
    416|30|AM General
    I know this works, but my QUESTION is: is there a better way to do this, and does the above approach of concatenating columns when you want to aggregate multiple columns go against any best practices?
    Thanks very much!

    Here's a way that uses analytics (I just like them):
    SQL> select * from sales_data;
                 FIRM_ID            SALES_AMT D_DATE               D_DATA
                     415                   45 15-JUN-2009 00:00:00 Lincoln Financial
                     415                   30 31-MAY-2009 00:00:00 Lincoln AG
                     416                   10 05-OCT-2008 00:00:00 AM General
                     416                   20 15-JAN-2008 00:00:00 AM General Inc.
    SQL> select firm_id, sum_amt, d_data
      2  from
      3  (
      4     select firm_id, d_data
      5           ,sum(sales_amt) over (partition by firm_id) sum_amt
      6           ,row_number() over (partition by firm_id order by d_date desc) rn
      7     from   sales_data
      8  )
      9  where rn = 1
    10  ;
                 FIRM_ID              SUM_AMT D_DATA
                     415                   75 Lincoln Financial
                     416                   30 AM General
