Best Practice Mail Authentication not really possible?

Hi All,
In an effort to clamp down on my security a bit better, I've decided to try and remove all possible Mail auth methods besides Kerberos, Cram-MD5 and APOP. In other words, no Login, PLAIN or Clear.
I have my own Certificate Authority that I give to my users and secure IMAP, POP and SMTP all work well. I've even turned on the submission port (587).
Now, I was hoping I could have an environment where Login, Plain and Clear are ALL disabled on plain connections, but still permitted IF done over SSL. I don't see any way of achieving this.
So, I set my machine to REQUIRE SSL. While this is somewhat satisfactory for IMAP and POP, it cannot be done for SMTP, as that would require all external sending mail servers to speak to my server over SSL, which next to none are willing to do.
Last but not least, webmail of course now chokes. I've set it to use port 993 and use SSL, but as I'm sure some have guessed, my certificate's common name is not "localhost" and my server is behind a NAT router, so to get webmail to work, traffic would have to be routed out of my network to the router and back in; otherwise the SSL host name doesn't match.
All in all, it's quite a pain!!!
Here's what I'd LIKE to see possible:
1. Support Cram-MD5 and Kerberos from any IP with or without SSL. This will enable webmail and modern email clients to work.
2. Support Clear ONLY IF IT IS VIA SSL ("plaintext + TLS" as my logs refer to it). This will enable Treos, PCs running Outlook [Express], and other non-CRAM-MD5 devices to work WITHOUT compromising on security.
3. Reject Clear, Login and Plain IF IT IS NOT VIA SSL.
Is this possible?

There is no way to ensure your users are completely unable to send credentials in the clear. You can only take steps to minimize the potential risks. Here are my thoughts.
Again, my answer is sendmail-specific, but hopefully it points you to what to look for in Postfix.
In the m4 config file for sendmail there is the following:
dnl # The following allows relaying if the user authenticates, and disallows
dnl # plaintext authentication (PLAIN/LOGIN) on non-TLS links
dnl #
define(`confAUTH_OPTIONS', `A p')dnl
Lines beginning with dnl are effectively comments.
With this option defined, plaintext logins (PLAIN/LOGIN) are allowed, but only if the connection is first encrypted with TLS/SSL. One thing not documented in the comments: if you delete the "p", plaintext authentication is allowed even without TLS.
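For Postfix (which I don't run myself, so treat this as a sketch to verify against your own version's documentation), the equivalent knobs in main.cf would be roughly:

# Offer STARTTLS, and forbid plaintext SASL mechanisms (PLAIN/LOGIN)
# unless the session is already TLS-encrypted
smtpd_tls_security_level = may
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noplaintext, noanonymous
smtpd_sasl_tls_security_options = noanonymous

And on the IMAP/POP side, if you happen to be using Dovecot, disable_plaintext_auth = yes gives the same "plaintext only over SSL/TLS" behavior.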
I was actually testing this out this weekend just doing a verification on my server. The results:
1. No authentication no ssl - Mail.app rejected. Log message on server stated relaying not allowed.
2. Authentication, no ssl - Mail.app just kept asking me for my password. Server log file showed multiple entries of what amounts to a client connect/disconnect with no traffic.
3. Authentication, plus ssl - Message sent immediately.
For number 2, I did not test whether the client was actually sending the password and the server was just ignoring it. I'm concerned about protecting passwords, but more concerned about preventing my server from becoming an open relay. The password may well have left the client and traversed the network in the clear.
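If you want to see what the server is actually offering (as opposed to what the client chooses to do), a quick manual check is to speak SMTP to it by hand; something like the following, with your own hostname substituted:

# Pre-TLS view: the EHLO response should not advertise AUTH PLAIN/LOGIN
telnet mail.example.com 25
EHLO test

# Post-TLS view: after STARTTLS is negotiated, AUTH PLAIN/LOGIN should appear
openssl s_client -connect mail.example.com:25 -starttls smtp
EHLO test

That only tells you what the server advertises, though; a misbehaving client could still blurt out credentials unasked, which is why I lean toward the options below.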
Some optional measures you can take to prevent users from harming themselves:
1. VPN and two SMTP servers. One SMTP server receives mail from the world; the other can only be connected to via the VPN tunnel. The VPN SMTP server then uses the exposed SMTP server as its upstream provider (I forget the term). This assumes the user remembers to start the VPN before the email client. Without the VPN running, you run into the possibility I mentioned in #2 above. Maybe there is a setting that could enforce the VPN before sending.
2. https webmail. Only allow access to email via web interface. SMTP authentication is not an issue then since you can have the localhost MTA of the webserver handle sending.
3. Managed accounts of some sort, so users can't turn off SSL auth.
Just some thoughts that I hope provide some ideas for you.
Cheers
- Mark

Similar Messages

  • Cisco ISE: 802.1x Timers Best Practices / Re-authentication Timers [EAP-TLS]

    Dear Folks,
    Kindly suggest the best recommended values for the timers in 802.1x (EAP-TLS). Should I keep them all at their defaults, or change some of them?
    Also, why do we need reauthentication timers? Is there any benefit to using them? Are they visible to users or transparent? And what are the best values, in case we do need to use them?
    Thanks,
    Regards,
    Mubasher
    My Interface Configuration is as below;
    interface GigabitEthernet1/34
    switchport access vlan 131
    switchport mode access
    switchport voice vlan 195
    ip access-group ACL-DEFAULT in
    authentication event fail action authorize vlan 131
    authentication event server dead action authorize vlan 131
    authentication event server alive action reinitialize
    authentication open
    authentication order dot1x mab
    authentication priority dot1x mab
    authentication port-control auto
    mab
    snmp trap mac-notification change added
    dot1x pae authenticator
    dot1x timeout tx-period 5
    storm-control broadcast level 30.00
    spanning-tree portfast
    spanning-tree bpduguard enable

    Hello Mubashir,
    Many timers can be modified as needed in a deployment. Unless you are experiencing a specific problem where adjusting the timer may correct unwanted behavior, it is recommended to leave all timers at their default values except for the 802.1X transmit timer (tx-period).
    The tx-period timer defaults to a value of 30 seconds. Leaving this value at 30 seconds provides a default wait of 90 seconds (3 x tx-period) before a switchport will begin the next method of authentication, and begin the MAB process for non-authenticating devices.
    Based on numerous deployments, the best-practice recommendation is to set the tx-period value to 10 seconds to provide the optimal time for MAB devices. Setting the value below 10 seconds may result in the port moving to MAC authentication bypass too quickly.
    Configure the tx-period timer.
    C3750X(config-if-range)#dot1x timeout tx-period 10
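    On the re-authentication part of your question: periodic re-authentication is disabled by default, and with EAP-TLS it is normally transparent to the user (the supplicant re-authenticates in the background with its certificate, so nothing is prompted). If you do enable it, a common approach is to let ISE push the interval via the RADIUS Session-Timeout attribute instead of hard-coding it on every port; as a sketch (verify the syntax on your IOS version):
    C3750X(config)#interface GigabitEthernet1/34
    C3750X(config-if)#authentication periodic
    C3750X(config-if)#authentication timer reauthenticate server
    Only enable it if you have a reason to (for example, to re-evaluate authorization results periodically), since it adds load on ISE in a large deployment.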

  • Looking for best practice Port Authentication

    Hello,
    I'm currently deploying 802.1x on a campus with Catalyst 2950 and 4506.
    There are lots of printers and non-802.1x devices (around 200) which should be controlled by their MAC address. Is there any "best practice" besides using sticky MAC address learning?
    I'm thinking of a central place where all MAC addresses are stored (i.e. ACS).
    Another method would be checking only the first part of the MAC address (the vendor OUI) on the switch ports.
    Any ideas out there??
    regards
    Hubert

    Check out the following link; it provides info on port-based authentication. See if it helps:
    http://www.cisco.com/en/US/products/hw/switches/ps628/products_configuration_guide_chapter09186a00801cde59.html
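    As an illustration (not a full config; adjust to your platform and IOS version), with the MAC addresses stored centrally in ACS, the per-port side is typically just MAB configured as a fallback after 802.1x. On newer IOS it looks something like:
    interface FastEthernet0/1
    authentication order dot1x mab
    authentication port-control auto
    mab
    dot1x pae authenticator
    Older images use the dot1x mac-auth-bypass form instead of the standalone mab keyword. Checking only the vendor OUI would have to be handled on the RADIUS/ACS side rather than on the switch port itself, as far as I know, so the central ACS approach tends to be the cleaner option.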

  • Lync2013 Best Practices Analyzer can not scan the edge server details

    Hi All,
    I have encountered a strange issue: the Lync 2013 Best Practices Analyzer tool can see that there is one Edge Server in the Lync infrastructure when scanning, but the scan result does not display the Edge Server details the way it does for the Front End Server (the Front End Server scan shows all details such as hardware, CPU, FQDN and so on, but the Edge Server does not).
    Anyone can help, much appreciated.
    Elva

    It seems to be a network issue.
    You should check that you have proper network access to the Lync Edge Server, as the Lync Edge Server is not in the same subnet as the Lync Front End Server.
    Lisa Zheng
    TechNet Community Support

  • Need Best practice (?) : authentication: Visual Studio - SQL 2008

    Our applications connect to our SQL databases using SQL authentication and a single account (per database). The downside is that the DB password is hardcoded in the app config files (so it is known to devs) and is rarely changed. Here are two options I'm considering. What is your take on them?
    1. Use Windows authentication and AD groups. I presume that passwords would not need to be hardcoded in config files in this arrangement, and that is an improvement. The downside is that users (while they should not normally have access to Management Studio) now have, in theory, direct access to the databases. That's why we used the single-user SQL authentication approach described above. Also, in the past when we tried to do this we had issues when users belonged to multiple groups: we could not determine their default DB.
    2. For all application-to-DB access, only use DSNs (Data Source Names) and a SQL account (per database). (I don't want to use a Windows account in a DSN. To prevent expiration, I'd be running around like crazy trying to update the DSN password before the apps break.)
    Your thoughts, please!
    TIA,
    edm2

    see
    http://msdn.microsoft.com/en-us/library/bb669066(v=vs.110).aspx
    Please Mark This As Answer if it helps to solve the issue Visakh ---------------------------- http://visakhm.blogspot.com/ https://www.facebook.com/VmBlogs
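    For what it's worth, the practical difference in the application is mostly just the connection string; roughly (server, database and user names here are placeholders, not a recommendation):
    Windows (integrated) authentication (no secret stored in the config file):
    Server=SQLHOST;Database=AppDb;Integrated Security=SSPI;
    SQL authentication (the password sits in the config file):
    Server=SQLHOST;Database=AppDb;User ID=app_user;Password=...;
    With option 1 you can also limit what the AD group can actually do by mapping it to a narrowly scoped database role rather than broad table rights, which addresses the "users could open Management Studio" concern.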

  • Network Users with network homes not really working for me

    I have with great pain setup a OS X Lion server on a Mac Mini that was supposed to be my central server to have 4 network users accounts and all the users data is stored on an external disk array with mounted network homes to the 2 iMacs and 2 Macbooks I have in my home.
    I have gotten it all working and all my Macs are joined to the Open Directory and each User can login as a network users on any of the Macs and get their files via mounted home directory from the server. The home directories on the server are backed up with Time Machine.
    I have found the following items that do not work proberly:
    1) Desktop background settings are sometimes just lost for whatever reason. The desktop background reverts to the default and you need to manually set it back to the one you had selected. This happens mostly when users have their own desktop pictures.
    2) The keychain gets screwed up. The user often gets the "Keychain doesn't exist to store ..." message and needs to choose to reset the keychain. Nothing I have tried, from Keychain First Aid to removing it and having a new one created, fixes the problem. It keeps coming back.
    3) The iTunes Store and App Store get confused about authorized computers. This happens because a user logs in from another computer, and then the iTunes Store tells the user this computer hasn't been authorized to play the purchased music. The same happens with iPhone apps from the App Store. Apparently those two stores are not set up to handle network users properly.
    4) Permission issues sometimes happen in applications like iPhoto, which complains about not being able to see photos or not being able to add new photos to the library. I need to run a permission repair on the iPhoto library to fix this.
    5) One critical one is that it's not really possible to restore files from Time Machine. The Time Machine backup is done on the server by an administrator account directly backing up the user directories. When you go into Time Machine on the server, not even the admin can drill down into the user directories, so no restore is possible. The individual users have no idea that there was ever a Time Machine backup done, as Time Machine is not set up in their accounts on the individual Macs. This prevents any possible restore.
    I reckon that many of the problems are related to having only one location for ~/Library, as the individual Macs write their user-related settings into this directory in one central location. So what happens is that something gets written on iMac 1, and then when I log in on iMac 2 it might not exactly match that Mac's config, and it gets confused, throwing one of the above errors.
    Trouble is, with central network home directories and the way they are mounted, I can't exclude the ~/Library folder. The only option I can see is mobile accounts, because I have seen in the preferences that when they sync, they handle Library items differently.
    Does anybody have any experience out there with this sort of thing and can advise what's the best way forward?
    If I can't resolve this, I'll go back to having network users with local home directories on each Mac, and just set up a network share for each user to which they can copy files if they want them available on other Macs. Not as nice, but at least it works!
    As a side note, I did this to make things easier, but so far it has cost me more trouble than I had before!

    Haven't heard anything from anybody, so it's probably too daunting a topic ...
    I have now moved on to try Portable Home Directories (PHD) and syncing ... what a disaster!
    First it took me ages to get this right, because the way the home directories are mounted on the clients from the server is just weird, which has to do with how AFP mounts are implemented. Since one AFP mount can't be mounted by several users on the same system, they use a workaround of mounting it to a temp directory and then linking it back to where it should be. Of course this causes major problems.
    Okay, it kind of worked, so let's move on to syncing PHDs. First of all, on initial creation it only syncs a small portion of the directory, which should be okay, but on some of my accounts it never went past this stage. It said it was all synced, but it had only synced the first 10% or so of the data. I wasn't able to make it sync any more.
    On other accounts it correctly synced all the data down, or so I thought. Apparently a few sync sessions back and forth and 50-60% of the data was gone. On further investigation it turned out to be the iTunes and iPhoto libraries. Turns out those don't sync properly via Home Sync!!!
    An Apple product is not able to properly sync Apple-specific library files!!!!
    So here my warning to everybody: DO NOT USE PHD and HOME SYNC to sync your data as you will lose stuff if you have iTunes and/or iPhoto libraries with Lion OS X Server!!!
    The whole Lion Server experience has been a disaster for me. Now I have a server that does file sharing and Time Machine backup sharing. I can do the same thing with a standard Mac running those services. What's the point of Lion Server for home use if nothing works properly?

  • Idoc processing best practices - use of RBDAPP01 and RBDMANI2

    We are having performance problems in the processing of inbound idocs.  The message type is SHPCON, and transaction volume is very high.  I am a functional consultant, not an ABAP developer, but will try my best to explain our current setup.
    1)     We have a number of message variants for the inbound SHPCON message, almost all of which are set to trigger immediately upon receipt under the Processing by Function Module setting.
    2)      For messages that fail to process on the first try, we have a batch job running frequently using RBDMANI2.
    We are having some instances of the RBDMANI2 almost every day which get stuck running for a very long period of time.  We frequently have multiple SHPCON idocs coming in containing the same material number, and frequently have idocs fail because the material in the idoc has become locked.  Once the stuck batch job is cancelled and the job starts running again normally, the materials unlock and the failed idocs begin processing.  The variant for the RBDMANI2 batch job is currently set with a packet size of 1 and without parallel processing enabled.
    I am trying to determine the best practice for processing inbound idocs such as this for maximum performance in a very high volume system.  I know that RBDAPP01 processes idocs in status 64 and 66, and RBDMANI2 is used to reprocess idocs in all statuses.  I have been told that setting the messages to trigger immediately in WE20 can result in poor performance.  So I am wondering if the best practice is to:
    1)     Set messages in WE20 to Trigger by background program
    2)     Have a batch job running RBDAPP01 to process inbound idocs waiting in status 64
    3)     Have a periodic batch job running RBDMANI2 to try and clean up any failed messages that can be processed
    I would be grateful if somebody more knowledgeable than myself on this can confirm the best practice for this process and comment on the correct packet size in the program variant and whether or not parallel processing is desirable.  Because of the material locking issue, I felt that parallel processing was not desirable and may actually increase the material locking problem.  I would welcome any comments.
    This appeared to be the correct area for this discussion based upon other discussions.  If this is not the correct area for this discussion, then I would be grateful if the moderator could re-assign this discussion to the correct area (if possible) or let me know the best place to post it.  Thank you for your help.

    Hi Bob,
    Not sure if there is an official best practice, but note 1333417 (Performance problems when processing IDocs immediately) does state that for high volumes, immediate processing is not a good option.
    I'm hoping that for SHPCON there is no dependency in the IDoc processing (i.e. it's not important if they're processed in the same sequence or not), otherwise it'd add another complexity level.
    In the past for the high volume IDoc processing we scheduled a background job with RBDAPP01 (with parallel processing) and RBDMANIN as a second step in the same job to re-process the IDocs with errors due to locking issues. RBDMANI2 has a parallel processing option, but it was not needed in our case (actually we specifically wouldn't want to parallel-process the errors to avoid running into a lock issue again). In short, your steps 1-3 are correct but 2 and 3 should rather be in the same job.
    Also I believe we had a designated server for the background jobs, which helped with the resource availability.
    As a side note, you might want to confirm that the performance issues are caused only by the high volume. An ABAPer or a Basis admin should be able to run a performance trace. There might be an inefficiency in the process that could be adding to the performance issue as well.
    Hope this helps.

  • Best practice Warnings in OBIEE 10G

    Hi
    Today I suddenly noticed in my RPD that there are 250 best practice warnings. Most are related to keys not being defined, etc.
    I want to know what the impact of best practice warnings is, as resolving 250 warnings will require me to invest a lot of time.
    Only if it's helpful and does something good for the OBIEE dashboards etc. will I spend more time on this.
    Please suggest.

    Hi,
    Warnings may not be harmful to your code, but clean metadata may give you the best performance. Defining keys is a best practice because it will improve query performance. Maybe you didn't define keys on unused objects, and the unused stuff in the RPD may degrade performance. Anyway, we don't always follow all the best practices, but do what is possible.
    mark if helpful/correct...
    thanks,
    prassu

  • Best Practices on OWB/ODI when using Asynchronous Distributed HotLog Mode

    Hello OWB/ODI:
    I want to get some advice on best practices when implementing OWB/ODI mappings to handle Oracle Asynchronous Distributed HotLog CDC (change data capture), specifically for “updates”.
    Under Asynchronous Distributed HotLog mode, if a record is changed in a given source table, only the column that has been changed is populated in the CDC table with the old and new value, and all other columns with the exception of the keys are populated with NULL values.
    In order to process this update with an OWB or ODI mapping, I need to compare the old value (UO) against the new value (UN) in the CDC table. If both the old and the new value are NOT the same, then this is the updated column. If both the old and the new value are NULL, then this column was not updated.
    Before I apply a row-update to my destination table, I need to figure out the current value of those columns that have not been changed, and replace the NULL values with their current values. Otherwise, my row-update would overwrite with NULLs the columns whose values have not changed. This is where I am looking for advice on best practices. Here are the 2 possible solutions I can come up with, unless you guys have a better suggestion on how to handle “updates”:
    About My Environment: My destination table(s) are part of a dimensional DW database. My only access to the source database is via Asynchronous Distributed HotLog mode. To build the datawarehouse, I will create initial mappings in OWB or ODI that will replicate the source tables into staging tables. Then, I will create another set of mappings to transform and load the data from the staging tables into the dimension tables.
    Solution #1: Use the staging tables as lookup tables when working with “updates”:
    1.     Create an exact copy of the source tables into a staging environment. This is going to be done with the initial mappings.
    2.     Once the initial DW database is built, keep the staging tables.
    3.     Create mappings to maintain the staging tables using as source the CDC tables.
    4.     The staging tables will always be in sync with the source tables.
    5.     In the dimension load mapping, “join” the staging tables, and identify “inserts”, “updates”, and “deletes”.
    6.     For “updates”, use the staging tables as lookup tables to get the current value of the column(s) that have not been changed.
    7.     Apply the updates in the dimension tables.
    Solution #2: Use the dimension tables as lookup tables when working with “updates”:
    1.     Delete the content of the staging tables once the initial datawarehouse database has been built.
    2.     Use the empty staging tables as a place to process the CDC records
    3.     Create mappings to insert CDC records into the staging tables.
    4.     The staging tables will only contain CDC records (i.e. new records, updated records, and deleted records)
    5.     In the dimension load mapping, “outer join” the staging tables, and identify “inserts”, “updates”, and “deletes”.
    6.     For “updates”, use the dimension tables as lookup tables to get the current value of the column(s) that have not been changed.
    7.     Apply the updates in the dimension tables.
    Solution #1 uses staging tables as lookup tables. It requires extra space to store copies of source tables in a staging environment, and the dimension load mappings may take longer to run because the staging tables may contain many records that may never change.
    Solution #2 uses the dimension tables as both the lookup tables as well as the destination tables for the “updates”. Notice that the dimension tables will be updated with the “updates” AFTER they are used as lookup tables.
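    Just to illustrate the carry-forward logic in Solution #2 (table and column names below are made up for the example), the dimension update can fold the lookup into the statement itself by wrapping each changed column in NVL, so that a NULL “new value” in the CDC/staging record keeps the dimension's current value:
    MERGE INTO dim_customer d
    USING stg_customer_cdc s
       ON (d.customer_id = s.customer_id)
    WHEN MATCHED THEN UPDATE SET
         d.customer_name = NVL(s.customer_name_un, d.customer_name),
         d.credit_limit  = NVL(s.credit_limit_un,  d.credit_limit);
    The same NVL pattern works in Solution #1 as well, with the staging copy of the source table as the lookup instead of the dimension (with the caveat that a genuine update of a column to NULL would need extra handling, e.g. comparing the UO/UN pair).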
    Any other approach that you guys may suggest? Do you see any other advantage or disadvantage against any of the above solutions?
    Any comments will be appreciated.
    Thanks.

    hi,
    can you please tell me how to make the JDBC call? I tried it as:
    1. TopicConnectionFactory tc_fact = AQjmsFactory.getTopicConnectionFactory(host, SID, Integer.parseInt(port), "jdbc:oracle:thin");
    and
    2. TopicConnectionFactory tc_fact = AQjmsFactory.getTopicConnectionFactory(host, SID, Integer.parseInt(port), "thin");
    -as given in http://www.acs.ilstu.edu/docs/oracle/server.101/b10785/jm_opers.htm#CIHJHHAD
    The 1st one is giving the error:
    Caused by: oracle.jms.AQjmsException: JMS-135: Driver jdbc:oracle:thin not supported
    at oracle.jms.AQjmsError.throwEx(AQjmsError.java:330)
    at oracle.jms.AQjmsTopicConnectionFactory.<init>(AQjmsTopicConnectionFactory.java:96)
    at oracle.jms.AQjmsFactory.getTopicConnectionFactory(AQjmsFactory.java:240)
    at com.ivy.jms.JMSTopicDequeueHandler.init(JMSTopicDequeueHandler.java:57)
    The 2nd one is erroring out:
    oracle.jms.AQjmsException: JMS-225: Invalid JDBC driver - OCI driver must be used for this operation
    at oracle.jms.AQjmsError.throwEx(AQjmsError.java:288)
    at oracle.jms.AQjmsConsumer.dequeue(AQjmsConsumer.java:1307)
    at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:1028)
    at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:951)
    at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:929)
    at oracle.jms.AQjmsConsumer.receive(AQjmsConsumer.java:781)
    at com.ivy.jms.JMSTopicDequeueHandler.receiveMessages(JMSTopicDequeueHandler.java:115)
    at com.ivy.jms.JMSManager.run(JMSManager.java:90)
    at java.lang.Thread.run(Thread.java:619)
    Is anything else required beyond this? Please help. :(
    oracle: 10g R4
    Linux environment, and Java is trying to do AQjmsFactory.getTopicConnectionFactory(...). The Java machine is different from the database machine, and no Oracle client is to be installed on the Java machine.
    The same code works fine when I use oci8 instead of the thin driver and run it on the DB machine.
    ravi

  • Best-Practice Best-In-Class Examples

    I am new to Flash and have found a few articles on best practices, which I find useful. I would like to find some repository or site that references outstanding Flash applications, sites, or objects. I want to learn from the best. I am not specifically interested in getting the code for the Flash content, although that would obviously be helpful. I would like to see what people are doing out there, and then I can try to apply it to what I'd like to do.
    Thanks for the help
    Rich Rainbolt


  • Best Practice in V7.0 : Issues with Sales Planning and Reporting

    I am trying to install the SAP Best Practices for BPC 5.1 on SAP BPC 7.0 SP 04. I have done this as I cannot find any Best Practice documents for version 7 as yet.
    I have managed to get through the Administration setup and most of the BPC Administration Configuration Guide, however I am having a problem with 7.4 Running a Data Management Package - Import on page 32 of 36. This step involves uploading a data file, Demo_Revenue_Data.txt, into BPC.
    The upload fails with the error "Invalid dimension ACCOUNT in lookup".
    I believe that this error may be driven by a previous step 6.4 Creating Script Logic where the logic for BP_Sales Application was required.
    My question is twofold in that I need to determine:
    1. Has anyone else tried the Best Practices for BPC 5.0 in BPC 7.0?
    2. Does anyone know how to overcome the error when uploading the Demo Revenue into BPC?
    Edited by: Kevin West on Jul 8, 2009 2:03 PM

    Hi,
    The BPC best practices documents from version 5 also work fine for 7.0, because 7.0 is just an update of 5.x.
    Running an Import involves logic only if you are running the package with the Run Default Logic option enabled.
    Your issue seems to be related to mapping, which means you have to check your Transformation and Conversion files.
    In any case, the best practices documents will not provide information about how to build Transformation and Conversion files.
    You should follow SAP BPC training; that will help you build your application more easily and faster.
    Regards
    Sorin Radulescu

  • Best Practice for Resolving OAS 10g R3 Classloading Issues

    What's the best practices for eliminating classloading issues for shared libraries that are loaded by default (apache.commons.logging, oracle.toplink, etc) in OAS 10g R3?
    So far it looks like my options are to exclude the conflicting JARs in my deployed applications or manually remove the entries from the application.xml and system-application.xml files in the OC4J instance config directory.
    I know that I can override the shared libraries loaded from the system-application.xml by using the <web-app-class-loader search-local-classes-first="true"/> element in my orion-web.xml but is that the best practice? Also note that this solution does not override the apache.commons.logging shared library loaded from the container's application.xml.
    So what is the best practice?


  • Is hard-coded subview sizing brittle or best practice?

    Coming from a web background, where explicit sizing is generally not considered best practice, I am not used to 'hardcoding' positioning values.
    For example, is it perfectly acceptable to create custom UITableViewCells with hardcoded values for subviews' frames and bounds? It seems that when the user rotates the device, I'd need another set of hardcoded frames/bounds? If Apple decides to release a larger TABLET ... then I'd need yet another update to the code - it sounds strange to me in that it doesn't seem to scale well.
    In my specific case, I'd like to use a cell of style UITableViewCellStyleValue2 but I need a UITextField in place of the detailTextLabel. I hate to completely write my own UITableViewCell but it does appear that in most text input examples I've found, folks are creating their own cells and hardcoded sizing values.
    If that is the case, is there any way I can adhere to the predefined positioning and sizing of the aforementioned style's textLabel and detailTextLabel (i.e. I'd like to replace or overlay my UITextField in place of the detailTextLabel but have all subview positioning stay intact)? Just after creating a default cell, cell.textLabel.frame returns 0, so I assume it doesn't get sized until the cell's layoutSubviews gets invoked ... and obviously that is too late for what I need to do.
    Hope this question makes sense. I'm just looking for 'best practice' here.
    Message was edited by: LutherBaker

    I think devs will be surprised at the flexibility in their current apps when/if a tablet is released. I'm of the opinion that little energy will be needed to move existing apps to a larger screen, if any at all. Think ratio, not general size.
    In terms of best practice...hold the course and let the hardware wonks worry about cross-over.
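    One way to act on the "think ratio" advice (just a sketch; valueField here is a hypothetical UITextField property on the custom cell) is to lay the subviews out relative to the cell's current bounds in layoutSubviews rather than baking in point values, so rotation or a different screen size is absorbed automatically:
    // Inside the custom UITableViewCell subclass
    - (void)layoutSubviews {
        [super layoutSubviews];
        CGRect bounds = self.contentView.bounds;
        CGFloat labelWidth = floorf(bounds.size.width * 0.3f);   // ~30% of the row for the label
        self.textLabel.frame = CGRectMake(10.0f, 0.0f, labelWidth, bounds.size.height);
        self.valueField.frame = CGRectMake(labelWidth + 20.0f, 0.0f,
                                           bounds.size.width - labelWidth - 30.0f,
                                           bounds.size.height);  // hypothetical UITextField
    }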

  • Best practice on storing the .as and .mxml files

    I have some custom components, and they use their own .as
    action script files. The custom components are placed in the
    "src/component" folder right now. Should I place the associated .as
    files in the same "src/component" folder? What are the suggested
    best practices?
    Thanks,

    Not quite following what you mean by "associated .as files",
    but yes, that sounds fine.
    Tracy

  • Portal Best Practices

    Hi folks,
    I've seen this link to SAP Portal Best Practices:  [http://help.sap.com/bp_epv260/ep_en/index.htm|http://help.sap.com/bp_epv260/ep_en/index.htm]
    here in SDN but it isn't working (The page cannot be found - HTTP Error 404 - File or directory not found). Has this content been moved to another place?
    I found this one http://help.sap.com/bp_epv170/EP_US/HTML/Portals_intro.htm . Is it an older version?
    What is the latest version of SAP Portals Best Practices? Where can I find all the scenarios available?
    Thanks in advance,
    Geraldo.

    Geraldo,
    check these links
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/519d369b-0401-0010-0186-ff7a2b5d2bc0
    With SAP Best Practices, failure is not an option!
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/e5b7bb90-0201-0010-ee89-fc008080b21e
    Thanks
    Bala Duvvuri

    Hi, In a report that I am creating, I need to get an input from user to calculate the running total. Is there a way to create a dummy prompt or something where user can provide his input. The actual scenario is that we need to calculate the total sal