Best Practice Problems

Trying to be a good developer :) and so I read many articles on the best way to do things and try to adopt these in my development work.
For my recent JSF developments this has included dropping JSTL and trying to implement everything in a 'pure JSF' way.
And from an HTML point of view, this has meant styling using CSS, and not resorting to complex tables to create layout, but again using CSS to create the layout.
This is where I have my problems.
I have some content like:
     <div class="row">
     <span class="rightj15">
     <h:outputText value="User name:"/>
     </span>
     <span class="leftj2">
     <h:inputText id="username" value="#{userDetailsBean.user.displayName}" disabled="true" styleClass="inputfield"/>
     </span>
     <span class="leftj3">
     <h:message for="username" errorClass="errorMessage"/>
     </span>
     </div>
This is one 'row' of output which is styled and laid out nicely. Other rows are also displayed showing other things.
The problem is that some rows are conditional, i.e. they should only be displayed for certain types.
I can control the rows with a panelGroup,
     <h:panelGroup rendered="#{not empty userDetailsBean.user.displayName}">
     <div class="row">
     <span class="rightj15">
     <h:outputText value="User name:"/>
     </span>
     <span class="leftj2">
     <h:inputText id="username" value="#{userDetailsBean.user.displayName}" disabled="true" styleClass="inputfield"/>
     </span>
     <span class="leftj3">
     <h:message for="username" errorClass="errorMessage"/>
     </span>
     </div>
     </h:panelGroup>
The problem here is that although the panelGroup means the content is not rendered, the layout markup is still present (in terms of the divs and spans). This is a problem because it means I get a blank row when the page is displayed in the browser.
The only way I can currently see to get round this is to either:
a) Use <c:if> to do the conditional rendering, lose my pure JSF solution, and make sure my backing beans are already created, etc.
b) Drop my CSS layout and use tables instead.
Does anybody have any other ideas?

Well, I tried what you suggested, but it didn't help; it just messed up the layout.
As I understand it, f:verbatim is used to encapsulate non-JSF template and action elements...
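For the record, one more pure-JSF option: let JSF components emit the wrapper markup themselves, so the divs and spans disappear along with the content when the row is not rendered. A minimal sketch, assuming JSF 1.2 (where h:panelGroup accepts layout="block" and renders its own div) and reusing the style classes from the snippet above:
     <h:panelGroup layout="block" styleClass="row"
                   rendered="#{not empty userDetailsBean.user.displayName}">
     <h:panelGroup styleClass="rightj15">
     <h:outputText value="User name:"/>
     </h:panelGroup>
     <h:panelGroup styleClass="leftj2">
     <h:inputText id="username" value="#{userDetailsBean.user.displayName}" disabled="true" styleClass="inputfield"/>
     </h:panelGroup>
     <h:panelGroup styleClass="leftj3">
     <h:message for="username" errorClass="errorMessage"/>
     </h:panelGroup>
     </h:panelGroup>
Because every wrapper here is a component, nothing reaches the browser when the outer rendered expression is false, so no blank row appears.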

Similar Messages

  • SAP Best Practice: Problems with Loading of Transaction Data

    Hi!
    I am about to implement the SAP Best Practices scenario "B34: Accounts Receivable Analysis".
    Therefore I load the data from an SAP ERP IDES system into an SAP NetWeaver 2004s system.
    My problems are:
    When I try to load the transaction data for InfoSources 0FI_AR_4 and 0FI_AR_6 I get the following errors/warnings:
    when I start the "schedule process", the status stays "yellow 14:27:31 (194 from 0 records)".
    On the right side I see some actions that are also "yellow", e.g. "DataStore Activation (Change Lo ): not yet activated".
    As a result I cannot see any data in tcode RSRT when executing the queries "0FIAR_C03/...".
    The problems there:
    1) The input help of the query's web template doesn't contain any values
    2) No data is shown
    Can someone help me to solve this problem?
    Thank you very much!
    Jürgen

    Be in the monitor window where you got the issue below:
    when I start the "schedule process", the status stays "yellow 14:27:31 (194 from 0 records)"
    and go to Environment in the menu options -> Transact. RFC -> In the Source System...
    Give the logon details, press Enter, and from there give the correct target destination as your BI server and execute...
    If you find some IDocs pending there, push them manually using F6,
    then come back to your load and refresh...
    If it still doesn't turn green, you can manually change the status to red in the STATUS tab, then go to the Processing tab, expand your processing details, right-click on the data packet which was not yet updated and select manual update...
    It shows a busy status, and when it comes out of that, refresh once again...
    Regards,

  • Best Practice on Knowledge Management, IS01 Problems and Solutions

    Been playing with KM and looking for insight from other users (will give points) using it for ICWC.
    We have multiple product lines, and we have documents with Q&As for each line. As I look at moving that into CRM via IS01, I am looking for any best practices or recommendations.
    1. Create a single problem and solution for every question.
    2. Create a single problem (listing all questions) for every product line and create multiple solutions that are linked to that problem (a solution for each question).
    3. Is LSMW a good tool for loading data in bulk?
    The ICWC search brings back the first line of the problem & solution on the screen, so I try to limit the characters used so it fits on the ICWC screen; a long first line on the problem doesn't allow the agent to see enough of the solution without opening the link.
    Thanks,
    Edited by: Glenn Michaels on Jun 14, 2008 9:52 PM

    Hello Glenn,
    If it helps, here's a scenario about the KB on my company's system.
    Our call center supervisors are the people who create problems and solutions in our KB. They maintain it but don't use the IS01 transaction. They use instead the People-Centric BSPs for problems and solutions (they're integrated into the IC WebClient with the help of the transaction launcher).
    Normally, they prefer creating multiple solutions for one problem instead of the single problem/single solution method, because some questions may have multiple solutions. They could put all the solution text in one solution object, but for maintenance purposes we think it's better to create a separate solution object for every solution text, because if one solution becomes obsolete, all we have to do is unlink it instead of changing the text.
    We don't use LSMW. I don't have much experience with LSMW, but if you use it, be careful to respect the KB interval numbers for problems and solutions. We implemented an initial set of problems and solutions in our development system, and we moved them to the quality and productive systems with the precious help of SAP notes '728295 - Transport the SDB customization between two CRM systems' and '728668 - Transport the content of the SDB between two CRM systems'.
    One cool idea for using the KB is the auto-suggest feature. The idea is to integrate the links between problems and solutions with, for example, service ticket categorization, using the BSP category modeler; when an agent classifies a ticket, the suggested solutions/problems for the chosen classification appear at the top of the screen.
    I think that's all.
    Sorry for my poor English. Today I'm not feeling very inspired.
    Kind regards.
    Edited by: Bruno Garcia on Jun 17, 2008 9:51 PM (added note 728668)

  • Subclass design problems/best practices

    Hello gurus -
    I have a question regarding the domain objects I'm putting in my cache. I have a Product object and would like to create a few subclasses - say BookProduct and MovieProduct (along with the standard Product objects). These really need to be contained in the same cache. The issue/concern here is that both subclasses have attributes that I'd like to index AND query on.
    When I try to create an index on a subclass attribute while there are plain "standard" products in the cache, I get the following error (the attribute only exists on one of the subclasses):
    2010-10-20 11:08:43.280/227.055 Oracle Coherence GE 3.5.2/463 <Error> (thread=DistributedCache:MyCache, member=2): Exception occured during index rebuild: java.lang.RuntimeException: Missing or inaccessible method: com.test.domain.product.Product.getAuthors()
    So I'm not sure whether the indexing is working or stops once it hits this exception.
    Furthermore, I get a similar error when attempting to filter on that attribute. So if I add the following filter:
    Filter filter = new ContainsAnyFilter( "getAuthors", authors );
    I will receive the following exception:
    Caused by: Portable(java.lang.RuntimeException): Missing or inaccessible method: com.test.domain.product.Product.getAuthors()
    What is considered best practice for this, assuming these really should be part of the same named cache? Should I subclass the extractors to inspect the object's class type during indexing or when applying filters? Or should I just add all the attributes of BookProduct and MovieProduct to the parent object and forget about subclassing? That has a pretty high "yuck" factor to me. I'm assuming people have run into this issue before and am looking for some best practices or perhaps something that deals with this that I'm missing. We're currently using Coherence 3.5.2. Not sure if it matters, but we are using the POF format for serialization.
    Thanks!
    Chris

    Hi Chris,
    I had a similar problem. The way I solved it was to use a subclass of ChainedExtractor that catches any RuntimeException occurring during extraction, like the following:
    /**
     * {@link ChainedExtractor} that catches any exceptions during extraction and returns null instead.
     * Use this for cases where you're not certain that an object contains the necessary methods to be extracted.
     * E.g. an object in the cache does not contain the getSomeProperty() method, but other objects do.
     * When these are put together in the same cache we might want to use a {@link ChainedExtractor} like the following:
     * new ChainedExtractor("getSomeProperty.getSomeNestedProperty"). However, this will result in a RuntimeException
     * for those entries that don't contain the object with someProperty. Using this class instead avoids the exception.
     */
    public class SafeChainedExtractor extends ChainedExtractor
    {
         public SafeChainedExtractor()
         {
              super();
         }

         public SafeChainedExtractor( String sMethod )
         {
              super( sMethod );
         }

         @Override
         public Object extract( Object entry )
         {
              try
              {
                   return super.extract( entry );
              }
              catch ( RuntimeException e )
              {
                   return null;
              }
         }

         @Override
         public Object extractFromEntry( Entry entry )
         {
              try
              {
                   return super.extractFromEntry( entry );
              }
              catch ( RuntimeException e )
              {
                   return null;
              }
         }
    }
    For all indexes and filters we then use extractors that subclass SafeChainedExtractor, like the following:
    public class NestedPropertyExtractor extends SafeChainedExtractor
    {
         private static final long serialVersionUID = 1L;

         public NestedPropertyExtractor()
         {
              super( "getSomeProperty.getSomeNestedProperty" );
         }
    }

    // adding an index:
    myCache.addIndex( new NestedPropertyExtractor(), false, null );

    // using a filter:
    myCache.keySet( new EqualsFilter( new NestedPropertyExtractor(), "myNestedProperty" ) );
    This way, the extractor will just return null when a property doesn't exist on the target class.
    Regards
    Jan

  • Problem with crystal reports from the All in One Best Practice package

    Hi,
    I tried to use the Financial Statements crystal report downloaded from the All in One Best Practice package but encountered the errors below. I followed the guide in 'Manual Steps for Additional Datasource Creation' to set up the additional data source. When I try to preview the report, I tried setting all the values to Null, but the error messages still appear. I believe this has something to do with the InfoSet, but I'm not sure how to resolve it. The reports which connect to an InfoSet don't seem to work for me.
    Errors:
    Failed to retrieve data from database. (then I press OK)
    Database connector error: 'no item structure data' (then I press OK)
    Database connector error: 'RFC_Closed'
    Please advise.
    Thank you

    Hi Afzal,
    I think you misunderstood. The crystal reports I am talking about are the 23 crystal report templates I got from the All in One Best Practice package. All I need to do is follow the "Quick Guide to Implement SAP Best Practices for Business Intelligence V4.31" to make those templates work. The templates that connect to database type SAP Table, Cluster or Function I can make work. The problem I am facing now is with the templates (for example, Financial Statements) that connect to database type SAP InfoSets. The errors I receive are stated in my first post.
    Please advise.

  • Problem installing SAP Best Practices Baseline

    I have already installed ECC 6.0 and performed the R2R build procedure.
    I am trying to finish by installing the Best Practices Baseline using the installation assistant in transaction /n/smb/bbi. However, I am missing lots of eCATT objects.
    Please help, thanks,

    hi,
    I have uploaded the BC sets, but I'm having a problem while activating the project. It says eCATT object /SMB99/CHECK_o0010_B32 does not exist,
    and eCATT object /SMB99/RZ11_o001_B32 does not exist in the system.
    210 eCATT objects are missing. I am thinking of an eCATT add-on installation using SAINT... BP_BLERP is the add-on for eCATT, but I'm not finding it on www.service.sap.com/swdc ... Any ideas? Tell me about the BP installation using SAINT.
    Thanks
    Rajdeep
    Message was edited by:
            rajdeep sarma

  • Best Practices Installation Problem

    Hi, I am trying to install the SAP Best Practices package V1-4.6000 (LATAM).
    I run transaction code /n/smb/bbi, then import the Layer 0
    building blocks and try to activate the building blocks.
    When I try to activate the B32 (MX) building block, an error occurs; the message is the following:
    "Activation finished with errors"
    When I check the prerequisites, the component
    /SMB99/CHECK_O010_B32 (check operating concern) is not activated.
    Could you help me with this issue?

    Hi Adarsh
    In the Best Practices CD there is a folder named DOCU; in this folder is an Add-on Installation Guide.
    Overview of the installation:
    1) You need to install the Best Practices Installation Assistant; this add-on is in the 600\BPINSTAS\DATA folder.
    2) You need to install the Best Practices Add-On; this add-on is in the 600\ERP\DATA folder.
    3) Follow the installation guide for your country. For example, I followed the Word file BL_Quick_Guide_EN_MX.doc, which is in the folder 50085672\BL_Mexico\documentation.
    Regards

  • Best Practice for Significant Amounts of Data

    This is basically a best-practice/concept question and it spans both Xcelsius & Excel functions:
    I am working on a dashboard for the US Military to report on some basic financial transactions that happen on bases around the globe.  These transactions fall into four categories, so my aggregation is as follows:
    Year,Month,Country,Base,Category (data is Transaction Count and Total Amount)
    This is a rather high level of aggregation, and it takes about 20 million transactions and aggregates them into about 6000 rows of data for a two year period.
    I would like to allow the users to select a Category and a country and see a chart which summarizes transactions for that country ( X-axis for Month, Y-axis Transaction Count or Amount ).  I would like each series on this chart to represent a Base.
    My problem is that 6000 rows still appears to be too many rows for an Xcelsius dashboard to handle. I have followed the concatenated-key approach and used SUMIF to populate a matrix with the data for use in the chart. This matrix has Bases as row headings (only those within the selected country) and Months as column headings. The data is COUNT. (I also need the same matrix with dollar amounts as the data.)
    In Excel this matrix works fine and seems to be very fast. The problem is with Xcelsius. I have imported the spreadsheet, but have NOT even created the chart yet and Xcelsius is CHOKING (and crashing). I changed Max Rows to 7000 to accommodate the data. I placed a simple combo box and a grid on the canvas - BUT NO CHART yet - and the dashboard takes forever to generate and is REALLY slow to react to a simple change in the combo box.
    So, I guess this brings up a few questions:
    1) Am I doing something wrong, and did I miss something that would prevent this problem?
    2) If this is standard Xcelsius behavior, what are the best practices to solve the problem?
    a. Do I have to create 50 different data ranges in order to improve performance (i.e. each Country-Category would have a separate range)?
    b. Would it even work with that many data ranges in it?
    c. Do I aggregate it as a crosstab (months as column headings) and insert that crosstabbed data into Excel?
    d. Other ideas that I'm missing?
    FYI:  These dashboards will be exported to PDF and distributed.  They will not be connected to a server or data source.
    Any thoughts or guidance would be appreciated.
    Thanks,
    David

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/ gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to a certain extent.
    Regards
    Nikhil

  • Best-practice for Catalog Views ? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000) and 300 end-users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is how I can implement this.
    My first idea is:
    1. Create 110 procurement catalogs (one for every product category). Each catalog should contain only its product category.
    2. Assign in my org model, at user level, all the "catalogs" that the user should access.
    Do you have any ideas for improving this?
    Greetings from Mexico,
    Diego

    Hi,
    Your way of doing it will work, but you'll get maintenance issues (too many catalogs, and a catalog link to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you'll have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
    My advice:
    - Challenge your customer about views, and try to limit the number of views, for example to strategic and non-strategic.
    - With CCM 1.0, stick to the procurement catalogs, or implement BAdIs to assign items to the views (I have experienced it; it works, but it is quite difficult), but with a limited number of views.
    Good luck.
    Vadim

  • Best Practice for Image placement and Anchored Frames for use in Robohelp 9

    Hi,
    I'm looking for the best practices for laying out my images in FrameMaker 10 so that they translate correctly to RoboHelp 9. I currently have images inside anchored frames that "run into" the right side of my text. I've adjusted the size of the anchored frame so that my text flows correctly around the image. Everything looks good in FrameMaker! Yeah! The problem is that when I link my FrameMaker document to RoboHelp, the text does not flow around my image in the same manner. On a couple of RoboHelp screens the image is running into the footer. I'm wondering if I should be using tables in FrameMaker in order to get the page layout that I'm looking for. Also, I went back and forth on whether this is a FrameMaker question or a RoboHelp question. Any assistance would be greatly appreciated.

    I think Jeff means this section of the RoboHelp forums:
    http://forums.adobe.com/community/robohelp/robohelp_framemaker

  • Best Practice to implement row restriction level

    Hi guys,
    We need to implement a security row-filter scenario in our reporting system. Following several recommendations already posted in the forum, we have created a security table with the following columns:
    userName  Object Id
    U1             A
    U2             B
    where our fact table is something like this:
    Object Id    Fact A
    A                23
    B                4
    Additionally we have created a row restriction on the universe based on the following where clause:
    UserName = @Variable('BOUSER')
    If the report only contains objects based on the fact table, the restriction is never applied. This makes sense, as the docs specify that row restrictions are only applied if the table is actually invoked in the SQL statement (the SELECT statement, presumably).
    The question is the following: what is the best practice recommended in this situation? Create a dummy column in the security table, map it into the universe and include the object in the query?
    Thanks
    Edited by: Alfons Gonzalez on Mar 8, 2012 5:33 PM

    Hi,
    This solution also seemed to be the most suitable for us. The problem we have discovered: when the restriction set is not applied for a given user (the advantage of using a restriction set being that it is not always applied), the query joins the fact table with the security table without applying any where clause based on @Variable('BOUSER'). This is not a problem if the security table contains a 1:1 relationship between users and secured objects, but when (as in our case) the relationship is 1:n, the query returns additional wrong rows.
    For the moment we have discarded the use of restriction sets. Putting a dummy column based on the security table may have undesired effects when the condition is not applied.
    I don't know if anyone has found a way to work around this.
    Alfons

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company selling tracking devices (GPS devices). Our SQL Server database is designed to have a table for each device we sell; currently there are 2500 tables in our database, and they all have the same columns, differing only in table name. Each device sends about 4K records per day.
    Currently each table holds from 10K to 300K records.
    What is the best practice for designing a database in this situation?
    When accessing the database from a C# application, which is better to use: direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    Tables columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I have come across in my 9 months of work at the company (working as a software engineer, but I am planning to take over database maintenance, since no one is maintaining it right now and I cannot do anything else in the code to make it faster).
    At the end of every month our clients generate reports for the previous month for all their cars; some clients have 100+ cars, and some have a few. This is when the real issue starts: they are pulling their data from our server over the internet while 2000 units are sending data to our server, and they keep getting read timeouts, since SQL Server gives priority to inserts and holds all select commands. I solved it temporarily in the code using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve; the problem is that whoever wrote the C# app used hard-coded SQL statements, AND the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now talking about reports: there are summary reports, stop reports, zone reports, etc.; most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that select statements don't get kicked out in favor of insert commands, but does SQL Server automatically select from the snapshots or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case, since our database size is 78GB.
    When I run code analysis on the app, Visual Studio tells me I'd better use stored procedures and views rather than hard-coded select statements; what difference will this make in terms of performance?
    Thanks in advance.

  • IPhone Best Practices - A Work In Progress

    Hello all. I've been tasked with introducing my coworkers into the inner workings of the iPhone, and there are a good number of pointers that I find myself saying over and over again. I'd like to share my best practices with everyone, as well as collect more pointers and opinions from the community at large.
    Care and Handling:
    First - wash your hands, often. Now I know we all do this often anyway, but I'd like to point out that a healthy amount of hand washing will really go a long way to keep your iPhone screen smudge free. The worst offender, unfortunately, is doughnuts. A small layer of sugar will render that area un-tappable, without any real indication that it has done so. If you are frantically tapping the screen on the iPod button and nothing is happening, clean your phone before you do a hard reset.
    Second - Pockets. Keeping your phone in your front pocket is natural and what most of us do. In these summer months, however, keeping your phone in a sweaty front pocket can do a good deal to the dirt level of the screen. If you find yourself cleaning your phone constantly, try a belt clip.
    Lastly - Battery Life. Your iPhone's battery life is in your hands, literally. Being aware of your power consumption and planning accordingly is going to be infinitely more important than the battery's native charge-holding ability. This goes especially for the day of purchase - as tempting as it may be to open the box and activate, immediately running around the house watching YouTube, it is best to let the phone charge for 12 hours before use. Charging the phone every night is an absolute must; skipping a day will kill the battery life as you ride the bottom edge the following day. Most of us have access to a USB port while we're at work, so the best idea is to plug in your phone when you sit down at your desk.
    iPod:
    Large Libraries: In the opening weekend, I got many complaints that you cannot manually manage your music. There is a workaround that has made me change the way I work with all of my iPods: the iPhone specific playlist. Simply create a playlist with all of the music you wish to put on your phone and sync that one playlist. This also helps with sync time - you have a start sync and an end sync, not a constant sync all throughout your music management, slowing your computer down in the process.
    TV Shows: I watch a lot of MST3K, which I have organized in iTunes as TV shows, split into seasons, the works. The problem that has arisen, therefore, is that of selective synchronization - you cannot specifically select the TV show you want to sync to the device, instead getting the choices to sync all, unwatched, or latest shows. This is problematic when each show is 700MB. Here's the workaround - select all of the episodes of a specific show and right-click, selecting "Mark as Not New", removing all of the little blue dots from the episodes. Select the one, three, or five episodes you want and right-click them, selecting "Mark as New", then sync the last one, three, or five unwatched episodes. The shows you selected will sync.
    iPhoto:
    Many users are complaining that iPhoto opens whenever the phone is connected. This is not a preference of the phone, but rather iPhoto. Remember when you first launched iPhoto and it asked you if you wanted to use iPhoto whenever your camera was attached? iPhoto is detecting that your phone is a camera and launching, just as you told it to do.
    Mail:
    POP accounts - too many unread messages: When first adding a POP account, all of the messages downloaded to the phone arrive as unread. Tapping a message, tapping back, and then tapping the next message can get tedious. Here's the workaround - tap the small down arrow in the upper right-hand corner of the screen, watching the number next to Inbox closely. When that number goes down by one, tap the arrow again. If that number hasn't gone down yet, wait a sec, and do not try to tap tap tap tap tap - you'll flood the input queue and crash Mail.
    Syncing Mail accounts - All too often people blame the iPhone when their mail does not work. A perfect test is to sync your accounts from Mail. If they work in Mail, they'll work on the phone; if they are unreliable in Mail, they will also be unreliable on the phone. The Mail client on the iPhone is just as powerful as any other mail client in terms of how it connects to mail servers; if you are having problems you need to check your settings before blaming the hardware. If you prefer to leave your install of Mail.app alone, create a new user account on your Mac, set up all of the accounts you want there, and use iTunes to sync that data to the phone. Make sure to remove that portion of sync from your actual user account's instance of iTunes, however, or it will all sync back.
    This message has not been downloaded from the server: This message has snagged a couple users, but upon investigation, these users have filled their iPhones to the absolute brim with music and video. It hasn't been downloaded from the server because there is no space to download to - this also applies to the Camera application dumping to the Home screen. Because there is no space, it can't add any new data. Make some room, then be patient as the mail client gets to that message in cleanup (often a sync or reboot will clear it up).
    Safari:
    Safari and iPod: Many users have reported iPod stopping in the middle of browsing, often pouting and pursing their lips crying, "This is terrible, I can't even browse the web and listen to music at the same time?". I then check their phone, and lo and behold they have upwards of eight separate pages open at the same time. This device (like every other computer out there) has a finite amount of memory, each page taking up a significant portion depending on how busy the page is. I've routinely gotten through entire albums while browsing through Safari, but I've got one page open in total, and it's usually mostly text. Keep it to one or two pages open and iPod will run forever if you let it.
    Web Apps: "This web app is terrible, it keeps booting me to Home!" When was your last reboot? How many other pages are open? In the same vein as Safari and iPod, Web Apps need a good deal of breathing room - give it to them. Close down other pages, stop iPod, or even reboot. Give the app a clean slate and it will perform, every time. iPhoneRemote users will attest to this.
    iCal:
    Multiple Calendars - Default Calendar: When adding a new appointment, it adds to the default calendar. Appointments can't be shunted to the correct calendar until after sync anyway, so create an "iPhone" calendar and make that the default. Because it's in that calendar, you'll know enough to move it to the appropriate calendar after sync.
    Please feel free to add your own best practices, and ask questions, too.

    Is there any application you can get for the iPhone to enlarge text and phone numbers?
    If included with an email or on a website, yes with no application needed.
    If you are referring to the text size for your iPhone's contact list, no.
    Can you insert a phone number from your contact list into a text message?
    No.
    I can't seem to figure it out: does the alarm clock work if you turn off the phone at night?
    No - powered off with the iPhone means powered off. Any phone that provides for this is not powered off - it is in deep sleep or deep standby mode, which the iPhone does not support. If you don't want your phone ringing or don't want to receive SMS at night but you want to use the iPhone's alarm feature as a wake-up alarm, you can turn on Airplane Mode before going to bed, which will also conserve the battery if your iPhone is not plugged in at night.
    Can you send a multimedia text message?
    No.

  • LRCC Face recognition - best practices?

    OK, so we are all new to the wonderful world of face recognition in LR. I'm trying to work out what the best practices for using this would be.
    A little bit of background - I have a catalog of over 200,000 images. In addition to portrait and wedding clients, a significant part of my work is with models and another significant part is theatre photography. I have been wanting some sort of face recognition to help with both for some time.
    What are your naming conventions for people? Here's mine:
    Ideally I would label people as "surname, firstname" so that I can keep members of a family together in the "Named People" display, but commas are not allowed in names. Also, the professional names of many models don't fit that pattern, e.g. "Strawberry Venom" or "Cute as Sin" are two models I have worked with.
    I am trying to come up with a sensible naming convention; at the moment it is "Surname/ Firstname" for clients, theatre folk and friends/family. Models are still a problem; at present I am thinking of "Surname/ Firstname (model name(s))". While I may not be able to remember the real names of models, I do usually know the names from model releases. This naming will still permit me to filter/find them in the Keyword List panel by just entering the model name.
    One final addition I am making to this naming convention is the use of a hashtag suffix on the name: #F for friends and family, #C for clients, #T for theatre/actors and #M for models. This enables me to filter on just models, or just actors, or just friends and family. Where people fall into multiple categories I add multiple hashtags. So photos of me would be keyworded with "Butterfield/ Ian #F #T".
    Unknown / unidentified people.
    What I am not yet certain about is how to handle unknown / unidentified people.  Unidentified people fall into a number of different categories.
    People I don't know and am never likely to know (e.g. random strangers on the street, local tour guides on holiday, random people in the background, etc.)
    This group is relatively easy to deal with: simply delete the face recognition. End of story.
    People I don't know the names of yet but am likely to find out (e.g. actors in a production for which I don't have a programme)
    For these people I am making up a unique name using the format "date/ Context-Gendernn", e.g. an unknown male actor at Stockport Garrick Theatre would be named "20150313/ SGT-M01". Although this may appear a complex solution, it has a number of advantages. If/when I do learn the name of the individual (e.g. I photograph them in a different production), it is simply a case of renaming the people keyword. Creating a unique name, rather than simply assigning all unknowns to a bucket name, helps the face recognition algorithms find this person without being confused by different faces assigned to the same name. I am also using the hashtag #U to make it easier to filter the unknown faces when I need to.
    People I don't know the names of and there is only a slim possibility of meeting/photographing again (e.g. guests at a client wedding)
    It feels as though I ought to just delete the face recognition and be done with it, and this is what I would do except for one thing. Other than manually drawing face regions, I have not yet found a way to get Lightroom to rescan a folder for faces if you have previously deleted the face recognition. This means that deleting face regions from a large number of people cannot easily be reversed. I might just leave these people in the "Unnamed People" category... at least until there is a way to rescan a folder or collection.
    Summary
    My practices are still evolving, but I hope these thoughts and ideas will help others think through the issues and come up with solutions that work for their situation. I am interested in hearing how other people are using the face recognition system, especially if anyone is aware of any 'best practices' that Adobe or anyone else has recommended.

    Glad it helped.
    Yes and no. You can still put the people keywords into hierarchies within the keyword list - you can arrange them just like any other keywords. So you just create a "Smith family" keyword and store "John Smith" under it. What you can't do is apply BOTH "Smith family" and "John Smith" to the same face.
    My use of the hashtags came about because I initially had a top-level keyword for models, one for clients, one for theatre people and one for family and friends. Then I discovered that some of the theatre folk were also clients (headshots), and what to do when a friend is also a client? So the hashtag system means a person can be a friend, a model and an actor as well as being a client! (#T #C #M #F)

  • Best Practice on using and refreshing the Data Provider

    I have a "users" page that lists all the users in a table - let's call it the master page. One can click on the first column of the master page, and it takes them to the "detail" page, where one can view and update the user's details.
    Master and detail use two different data providers based on two different CachedRowSets.
    Master CachedRowSet (session scope): SELECT * FROM Users
    Detail CachedRowSet (session scope): SELECT * FROM Users WHERE User_ID=?
    I want the master to be updated whenever the detail page is updated. There are various options to choose from:
    1. I could call masterDataProvider.refresh() after I call detailDataProvider.commitChanges() - which is called by the Save button on the detail page. The problem with this approach is that the master page will not be refreshed across all user sessions, but only for the one saving the detail page.
    2. I could call masterDataProvider.refresh() in the preRender() event of the master page. The problem with this approach is that refresh() will be called every single time someone views the master page. Furthermore, if someone goes to the next page (using the built-in pagination on the table on the master page), clicks on a user to view its detail and then closes the detail page, it does not keep track of the pagination (what page the user was on when he/she clicked on a record to view its detail).
    I can find some workaround to resolve this problem, but I think this should be a fairly common usage (two-page CRUD with master-detail). If we can discuss and document some best practices for doing this, it will help all developers.
    Discussion:
    1. What is the best practice for setting the scope of the data providers and CachedRowSets? I noticed that in the tutorial examples, they used page/request scope for the data provider but session scope for the associated CachedRowSet.
    2. What is the best practice for refreshing the master data provider when a record/row is updated in the detail page?
    3. How do we keep track of pagination (what page the user was on when he/she clicked on the first column in the master page table), so that upon updating the detail page, we can provide the user with a "Close" button to take them back to whatever page number he/she was on?
    Thanks
    Message was edited by:
    Sabir

    Thanks. I think this is useful information for all. Do we even need two data providers and associated row sets? Can't we just use TableRowDataProvider, like this:
    TableRowDataProvider rowData = (TableRowDataProvider) getBean("currentRow");
    If so, I am trying to figure out how to pass this from the master page to the detail page. Essentially the detail page uses a row from the master data provider. Then I need the user to be able to change the detail (row) and save the changes (in the table). This is a fairly common issue in most data-driven web apps. I need to design it right, vs. just coding.
    Message was edited by:
    Sabir
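    For what it's worth, a minimal sketch of option 1 from the question above, written as a Save-button action on the detail page bean (Java Studio Creator style). The provider names and the commitChanges()/refresh() calls are the ones mentioned in this thread; the "master" navigation case and the log()/error() helpers are assumptions about how the application is wired:
    // Save action on the detail page: commit the edit, then re-read the
    // master row set so the users list reflects the change (this session only).
    public String save_action() {
        try {
            detailDataProvider.commitChanges();   // write the detail row to the database
            masterDataProvider.refresh();         // option 1: refresh the master after the commit
        } catch (Exception e) {
            log("Error saving user detail", e);   // assumed logging helper
            error("Could not save changes: " + e.getMessage());
            return null;                          // stay on the detail page and show the message
        }
        return "master";                          // assumed navigation case back to the master page
    }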
