JSF best practice for managed bean population

Hi,
Consider a simple session-scoped managed bean with one attribute that is calculated dynamically.
There seem to be three ways of populating that attribute (that I could think of):
1. Have the bean's get<attribute> method fetch the data and populate the attribute.
2. Have the bean's constructor fetch the data and populate the attribute.
3. Have an action fetch the data and set it from the action code (perhaps using a value binding to look up the bean, if the bean that needs populating is not the same bean that holds the action).
Option 1 seems fine when the data must be recalculated for every value reference to that attribute, but it is overhead for data that does not need recalculating on every reference.
Option 2 ensures the data is calculated only once per session, but it still feels a bit strange to do it that way.
With option 3, it seems you populate the dynamic content of the next page, which is intuitive; but some areas of a page may be common content with dynamic data, and that has nothing to do with navigation or which page is displayed next.
Is there a best practice for populating managed bean data?
Do different cases fit different approaches?
Any thoughts are appreciated. A rough sketch of the three options is below.
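A minimal sketch of what the three options might look like in a session-scoped bean; the class name, the "items" property, and the loadItemsFromDatabase() stub are made up for illustration, and the three approaches are combined in one class only for comparison:

import java.util.Arrays;
import java.util.List;

public class ItemBean {

    private List<String> items;

    // Option 2: populate in the constructor (runs once per session for a session-scoped bean).
    public ItemBean() {
        items = loadItemsFromDatabase();
    }

    // Option 1: populate (or refresh) in the getter. Without the null check this
    // would run on every value reference to #{itemBean.items}.
    public List<String> getItems() {
        if (items == null) {
            items = loadItemsFromDatabase();
        }
        return items;
    }

    // Option 3: populate from an action method, then navigate via the returned outcome.
    public String showItems() {
        items = loadItemsFromDatabase();
        return "showItems"; // matched against a navigation rule in faces-config.xml
    }

    // Stand-in for the real data-access code.
    private List<String> loadItemsFromDatabase() {
        return Arrays.asList("item1", "item2");
    }
}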

I think it should be either Option 2 or Option 3.
Option 2 would be necessary if the bean data depends on some request parameters.
(Example: Getting customer bean for a particular customer id)
Otherwise, Option 3 seems the reasonable approach.
But I am also pondering this issue; the above are just my initial thoughts.

Similar Messages

  • Best practice for backing bean population? (also, ActionListener RANT)

    Hello,
    I am about 3/4 of the way through development of a small to medium size JSF application. Sometimes I really like JSF, but much of the time I am left puzzled or frustrated for hours trying to find workarounds to JSF's bugs/glitches and design flaws.
    For example, early on, I was impressed with how easy it was to invoke a method from a page using an ActionListener. Now that I'm actually building things with JSF, the ActionListener functionality still seems cool, but incredibly half-baked. I find myself using request parameters LIKE CRAZY to work around the fact that JSF doesn't support passing parameters directly to backing bean methods. This feels awkward and wrong, considering that JSF is intended to abstract the HTTP underpinnings. To add insult to injury, I often have to iterate through ALL of the request parameters looking for one whose ID ends with my desired property name (since JSF prepends its own generated IDs). I don't like doing things in a hacky way. This seems very hacky, and I feel dirty doing it.
    So, my first question is: what is the best practice for populating backing beans? How do others accomplish this? I can think of several other approaches, but none feel less hacky.
    Second, are there plans in the next spec (please say there are) to allow parameters to be passed to backing bean methods? If not, WHY THE HECK NOT?
    Even though JSF expert group people have been conspicuously absent from this forum of late, I'd really appreciate responses from you as well.
    Thank you for your thoughts.

    Hi BrownBear,
    I've been using JSF for about 6 months now and I'd be glad to help as much as I can.
    Concerning parameters, I'm not sure what your issue is, but I use the f:param tag to pass them. If you could post an example of what you are trying to do, I could see exactly what your issue is. Maybe f:param can't help you.
    As for best practice for populating backing beans, I personally try to let JSF do as much as possible. For example, if I have a backing bean with five properties, I make sure that they are all on the JSP page the bean serves. If one of the properties is just there as an ID, let's say a Person ID (DB row key), then I put it on my JSP page as a hidden input field. I do the same with the properties that are only for display, if I want them back in my bean when the request comes back.
    Hope this helps somehow. Please feel free to ask specific questions related to your problem; I'll monitor this post and pass on to you the little JSF experience I have.
    I'm pretty happy with JSF as it is, but it sure needs improvements. :) What the heck, it's version 1.01 after all, and the next release should be a great one with the integration of JSTL.
    Cheers
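    For what it's worth, here is roughly how the f:param approach can look; the CustomerBean class, the select() action, and the "customerId" parameter name are invented for illustration. On the JSP side you would pass the parameter with something like <h:commandLink action="#{customerBean.select}" value="Details"><f:param name="customerId" value="#{row.id}"/></h:commandLink>, and read it back in the bean:

    import java.util.Map;
    import javax.faces.context.FacesContext;

    public class CustomerBean {

        private String customerId;

        // Action method: read the parameter that f:param added to the request.
        public String select() {
            Map params = FacesContext.getCurrentInstance()
                    .getExternalContext().getRequestParameterMap();
            customerId = (String) params.get("customerId");
            // ... load the customer for customerId here ...
            return "customerDetails"; // navigation outcome
        }

        public String getCustomerId() {
            return customerId;
        }
    }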

  • Best practices for managed beans and navigation flow

    Hi,
    I have the following scenario along with the possible solutions. I need confirmation from the JSF experts on which is the recommended way of doing it.
    Scenario
    I need to show a list of items from a database/config file to the user. The user will select Show Details from the menu, and the details page will be shown with the list of items.
    User - > Menu - >Show Details -> Details Page.
    Components
    DetailsBean.java
    menu.jsp
    Details.jsp
    Solution 1:
    a) When the user clicks Show Details, call DetailsBean.populate().
    b) DetailsBean.populate() populates the list from the DB, stores it as a property on the bean, and returns the String for the next flow.
    c) The user is navigated to Details.jsp using the <from-outcome> for menu.jsp.
    d) Details.jsp reads the property from DetailsBean to show the list of items on the page.
    Solution 2:
    a) When the user clicks Show Details, navigate the user to Details.jsp.
    b) Details.jsp is tied to the DetailsBean.populate() method; it gets the list and renders the page.
    I am a little confused about which one to use and when. Are there scenarios where we specifically need Solution 1, and others where we specifically need Solution 2?
    In Brief
    *" Should we populate bean first and let the jsp page, just draw what ever is there in the bean, without thinking much about how the bean get populated"*
    or
    *"Should we let JSP to call methods on bean, get contents directly and show to user, no need to store in bean."*
    Let me know, my you need more information about my question.
    Expecting an quick reply on this.
    Thanks,
    Sudhir

    In the real world, I never use navigation cases. I always declare action methods void (or return null, if you want). I just show the results on the same page, if necessary with the help of the 'rendered' attribute. Then for plain page-to-page navigation I just use GET all the way, with h:outputLink or plain HTML <a> elements, not commandLinks. In the end it's all much better for the user experience. A rough sketch of that style is below.
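    A rough illustration of the "void action method plus rendered attribute" style described above; the DetailsBean class, its properties, and the loadDetailsFromDatabase() stub are all invented for the example. The page would show the result block with something like rendered="#{detailsBean.detailsLoaded}".

    import java.util.Arrays;
    import java.util.List;

    public class DetailsBean {

        private List<String> details;

        // Void action method: populate the data and stay on the same page.
        public void showDetails() {
            details = loadDetailsFromDatabase();
        }

        public List<String> getDetails() {
            return details;
        }

        // Used by the page's 'rendered' attribute to decide whether to show the result block.
        public boolean isDetailsLoaded() {
            return details != null && !details.isEmpty();
        }

        // Stand-in for the real query.
        private List<String> loadDetailsFromDatabase() {
            return Arrays.asList("detail A", "detail B");
        }
    }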

  • JSF - Best Practice For Using Managed Bean

    I want to discuss what the best practice is for managed bean usage, especially using session scope or request scope to build database-driven pages.
    ---- Session Bean ----
    - In the book Core JavaServer Faces, the author mentions that in most cases a session bean should be used, unless the processing is passed on to another handler. Since JSF can store the state on the client side, I think storing everything in session is not a big memory concern (can some expert confirm this is true?). Session objects are easy to manage and state can be shared across pages. It can make programming easy.
    In the case of a page bound to a result set, the bean usually holds a java.util.List for the result, which is initialized in the constructor by querying the database first. However, this approach has a problem: when the user navigates to another page and comes back, the data is not refreshed. You can of course solve the problem by issuing the query every time in your getXXX method, but then you need to be very careful that you don't bind the XXX property too many times. If you query in getXXX, setXXX is also tricky because you don't have a member to set. You usually don't want to persist the result-set changes in setXXX, as the changes may not be final; instead, you want to handle them in an action listener (like a save(ActionEvent)).
    I would be glad to see your thoughts on this.
    --- Request Bean ---
    A request bean is initialized every time a request is made. It sometimes drove me nuts because JSF does not seem to be very consistent in updating model values. Suppose you have a page showing a parent plus a list of children records from the database, and you also allow the user to change the children directly. If I bind the parent to a bean called #{Parent} and bind the children to an ADF table (value="#{Parent.children}" var="rowValue"), and I set Parent as request scope, the setChildren method is never called when I submit the form. Not sure if this is just ADF or a JSF problem, but if you change the bean to session scope, everything works fine.
    I believe JSF doesn't update the bindings for all component attributes; it only updates the input components' value bindings. Can someone please verify this is true?
    In many cases, I found a request bean very hard to work with if there are lots of updates. (I had lots of trouble updating the binding value for rendered attributes.)
    However, a request bean works fine for read-only pages and simple bound forms. It definitely frees up memory quicker than a session bean.
    ----- any comments or opinions are welcome!!! ------
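    One common compromise for the "constructor query goes stale" problem described above is to load lazily in the getter, cache the result in the session-scoped bean, and expose an explicit refresh action for when the user returns to the page. A minimal sketch; the bean name, the property, and the loadOrders() stub are invented:

    import java.util.Arrays;
    import java.util.List;
    import javax.faces.event.ActionEvent;

    public class OrderListBean {

        private List<String> orders;

        // Value bindings may call this several times per request, so only query when needed.
        public List<String> getOrders() {
            if (orders == null) {
                orders = loadOrders();
            }
            return orders;
        }

        // Bind this to a commandButton/commandLink actionListener (or call it from
        // another action) when returning to the page, so the data is re-read.
        public void refresh(ActionEvent event) {
            orders = null;
        }

        // Stand-in for the real database query.
        private List<String> loadOrders() {
            return Arrays.asList("order 1", "order 2");
        }
    }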

  • Best Practice for Managing a BPC Environment?

    My company is currently running a BPC 5.1 MS environment and will soon be upgrading to version 7.0 MS.  I was wondering if there is a white paper or some guidance that anyone can give with regard to the best practice for managing a BPC environment.  Which brings to light several questions in my mind:
    1. Which department(s) in a company should "own" the BPC application?
    2. If both, what's SAP's recommendation for segregation of duties?
    3. What roles should exist within our company to manage BPC?
    4. What type(s) of change control is SAP's "Best Practice"?
    We are currently evaluating the best way to manage the system across multiple departments, however there is no real business ownership in the system, which seems to be counter to the reason for having BPC as a solution in the first place.
    Any guidance on this would be very much appreciated.

  • What are best practices for managing my iPhone from both work and home computers?

    What are best practices for managing my iPhone from both work and home computers?

    Sync iPod/iPad/iPhone with two computers
    Although it isn't possible to sync an Apple device with two different libraries, it is possible to sync with the same logical library from multiple computers. Each library has an internal ID, and when iTunes connects to your iPod/iPad/iPhone it compares the local ID with the one the device normally syncs with. If they are the same, you can go ahead and sync...
    I have my library cloned to a small 1Tb USB drive which I can take between home & work. At either location I use SyncToy 2.1 to update the local copy with the external drive. Mac users should be able to find similar tools. I can open either of the local libraries or the one on the external drive and update the media content of my iPhone. The slight exception is Photos, which normally connects to a specific folder on a specific machine, although that can easily be remapped to the current library if you create a "Photos" folder inside the iTunes Media folder so that syncing the iTunes folders keeps this up to date as well. I periodically sweep my library for new files & orphans with iTunes Folder Watch, just in case I make changes at one location but then overwrite the library with a newer copy from the other. Again, Mac users should be able to find similar tools.
    As long as your media is organised within an iTunes Music or iTunes Media folder, in turn held inside the main iTunes folder that has your library files (whether or not you let iTunes keep the media folder organised), each library can access items at the same relative path from the library folder, so the library can be at different drives/paths on different machines. This solution ensures I always have adequate backups of my library and I can update my devices whenever I can connect to the same build of iTunes.
    When working with an iPhone, earlier builds of iTunes would remove any file not physically present in the local library, even if there was an entry for it, making manual management practically redundant on the iPhone. This behaviour has been changed, but it will still only permit manual management with a library that has the correct internal ID. If you don't want to sync your library between machines on a regular basis, just copy the iTunes Library.itl file from the current "home" machine to any other you want to use, then clean out the library entries and import the local content you have on that box.
    tt2

  • Best practices for managing movies (iPhoto, iMovie) to iPhone

    I am looking for some basic recommendations/best practices on managing the syncing of movies to my iPhone. Most of my movies come either from a digital camcorder into iMovie or from a digital camera into iPhoto.
    Issues:
    1. If I do an export or a share from iPhoto, iMovie, or QuickTime, what formats should I select? I've seen 3gp and m4v.
    2. When I add a movie to iTunes, where is it stored? I've seen some folder locations like iMovie Sharing/iTunes. Can I copy them directly there, or should I always add to the library in iTunes?
    3. If I want to get a DVD I own into a format for the iPhone, how might I do that?
    Any other recommendations on best practices are welcome.
    Thanks
    mek

    1. If you type "iphone" or "ipod" into the help feature in iMovie, it will tell you how.
    "If you want to download and view one of your iMovie projects to your iPod or iPhone, you first need to send it to iTunes. When you send your project to iTunes, iMovie allows you to create one or more movies of different sizes, depending on the size of the original media that’s in your project. The medium-sized movie is best for viewing on your iPod or iPhone."
    2. Mine appear under "Movies", which is where iMovie put them automatically.
    3. If you mean movies purchased on DVD, then copying them is illegal and cannot be discussed here.
    From the terms of use of this forum:
    "Keep within the Law
    No material may be submitted that is intended to promote or commit an illegal act.
    Do not submit software or descriptions of processes that break or otherwise ‘work around’ digital rights management software or hardware. This includes conversations about ‘ripping’ DVDs or working around FairPlay software used on the iTunes Store."

  • Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

    What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers, but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both versions, or is it better to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
    I'm currently leaning towards creating two separate deployment shares, just so that I don't have to keep typing (x86) and (x64) for every application I import, as well as making it easier when choosing applications in the Lite Touch installation. But I know each deployment share has the option to create both an x86 and x64 boot image, so that's why I am confused.

    Supporting two task sequences is way easier than supporting two shares. Two shares means two boot media, or maintaining a method of directing the user to one or the other. Everything needs to be imported or configured twice, not to mention doubling the storage space. MDT is designed to have multiple task sequences, so why wouldn't you use them?
    Supporting multiple task sequences can be a pain, but not bad once you get a system. Supporting app installs intelligently is a large part of that. We have one folder per app install, with a wrapper VBScript that handles OS detection. If there are separate binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So: import once, assign once, and forget it. It's the same install package we use for Altiris, and we'll be using a PowerShell version of it when we fully migrate to SCCM.
    Others handle x86 and x64 apps separately and use the MDT app details to select which platform the app is meant for. I've done that, but since we have a template for the VBScript wrapper and it's a standard process, I believe our way is easier. YMMV.
    Once you get your apps into MDT, create bundles: core build bundle, core deploy bundle, laptop deploy bundle, etcetera. Now you don't have to assign twenty apps to both task sequences, just one bundle. When you replace one app in the bundle, all task sequences are updated automatically. It's kind of the same mentality as Active Directory: users, groups and resources = apps, bundles and task sequences.
    If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share, and use a selection profile to upload only your deploy side to production. In fact, I separate everything (except drivers) into build and deploy folders on my lab server. Don't mix build and deploy, and don't mix Lab/QA and production. I also keep a "Retired" folder: when I replace an app, TS, OS, etcetera, I move it to the retired folder and append "RETIRED - " to the front of it so I can instantly spot it if it happens to show up somewhere it shouldn't.
    To me, the biggest "weakness" of MDT is its flexibility. There are literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things get complicated. Tossing everything into one giant bucket will have you pulling your hair out.

  • What is a best practice for managing a large amount of ever-changing hyperlinks?

    I am moving an 80+ page printed catalog online. We need to add hyperlinks to our Learning Management System courses to each reference of a class - there are 100s of them. I'm having difficulty understanding what the best practice is for consistent results when I need to go back and edit (which we will have to do regularly).
    These seem like my options:
    Link the actual text - sometimes when I go back to edit the link I can't find it in InDesign but can see it's there when I open up the PDF in Acrobat
    Draw an invisible box over the text and link it - this seems to work better but seems like an extra step
    Do all of the linking in Acrobat
    Am I missing anything?
    Here is the document in case anyone wants to see it so far. For the links that are in there, I used a combination of adding the links in InDesign then perfecting them using Acrobat (removing additional links or correcting others that I couldn't see in InDesign). This part of the process gives me anxiety each month we have to make edits. Nothing seems consistent. Maybe I'm missing something obvious?

    What exactly needs to be edited: the hyperlink, the content, or something else?

  • Best Practice for Managing Cookies in an Enterprise Environment

    We are upgrading to IE11 for our enterprise. One member of the team wants to set a group policy that will delete all cookies every time the user exits IE11. We have some websites that users access that use cookies to track progress in training, but those cookies would be deleted when the user closes the browser. What is the business best practice regarding deleting all history, temporary internet files, and especially cookies when closing a browser?
    If you can point me to a white paper on this topic, that would be helpful.
    Thanks
    Bill

    Hi,
    Regarding cookie settings, we can manage IE privacy settings using Administrative Templates for IE 11:
    Administrative templates and Internet Explorer 11
    Delete and manage cookies
    The Administrative Templates for IE 11 can be downloaded from here:
    Administrative Templates for Internet Explorer 11
    Hope this helps.
    Best regards
    Michael Shao
    TechNet Community Support

  • Best practice for management of Portal personalisation

    Hi
    I've been working in our sandpit client figuring out all the portal personalisation and backend/frontend customisation for ESS (with some great help from SDN). However, I'm not clear on what best practice is when doing this for 'real'.
    I've read in places that objects to be personalised should be copied first. Elsewhere I've read about creating a new portal desktop for changes to the default framework page, etc.
    What I'm not clear on is the strategy for how you manage and control all the personalisation changes. For example, if you copy an iView and personalise it, where do you copy it to? Do you create "Z" versions of all the folders of the standard content, or something else?
    Do you copy everything at the lowest possible object level, or the highest?
    What are the implications of applying support or enhancement packs?
    We're going pretty vanilla to begin with, in that there will just be a single ESS role, so things should be fairly simple from that point of view, but I've trawled all around SDN without being able to find clear guidance on the overall best-practice process for personalisation.
    Can anyone help?
    Regards
    Phil
    Edited by: Phil Martin on Feb 15, 2010 6:47 PM

    Hi Phil,
    To keep things organized I would create a new folder (so don't use the desktop one). The new folder should sit under Portal Content. In that folder create a folder for ESS, then inside that one for iViews (e.g. COMPANY_XYZ >>> ESS >>> iViews), then copy via delta link the iViews you want from the standard SAP folders.
    Then you will have to create another folder for your own roles (e.g. ESS >>> Roles). Inside this folder create a new role and add your iViews (again via delta link). Or you could just copy one of the standard SAP roles into this folder and not bother with copying the iViews. What you do depends on how many changes you intend to make; sometimes, if the changes are only small, it is easier to copy the entire SAP role (via delta link) and make your changes.
    You should end up with something like this in your PCD:
    Portal Content
         COMPANY_XYZ
              ESS
                   iViews
                        iView1
                        iView2 etc..
                   Roles
                        Role1
                        Role2 etc...
    Then don't forget to assign your new roles to your users or groups.
    Hope that makes sense.
    BRgds,
    Simon
    Edited by: Simon Kemp on Feb 16, 2010 3:56 PM

  • Advice re best practice for managing the scan listener logs and list logs

    Hi friends,
    I've just started a job as a RAC DBA for some big 24*7 systems; I've never worked with Clusterware and RAC before.
    Two space problems:
    1) Very large listener_scan2.log in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/trace folder
    2) Heaps of log_nnn.xml files in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/alert folder (4Gb used up)
    I'd welcome advice on the best way to manage these in the short term (i.e. delete manually), and on the recommended practice and safest way going forward (ADR/adrci maybe; I'm not sure how it works with SCAN listeners).
    I'd also welcome advice and commands that could be used to safely clean these up and put a robust mechanism in place for log file management in RAC and Clusterware systems.
    Finally, should I be checking the log files in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/alert regularly?
    My experience with listener logs is that they are only looked at when there are major connectivity issues and on the whole are ignored.
    Thanks for your help,
    Cheers, Rob

    Have you had any issues that require them for investigative purposes? If not, just remove them. Are the logs required for some sort of audit process? If yes, gzip them to a location where you can use your OS tape backup policies to retain them for n days. Once you remove an active file, it should recreate the file and continue without interruption.

  • Best Practice for managing variables for multiple environments

    I am very new to Java Web Dynpro and have a question concerning our deployments to Sandbox, Development, QA, and Production environments.
    What is the 'best practice' people use so that, if you have information specific to each environment, you don't hard-code it in your Java Web Dynpro code?
    I could put the value in a properties file, but how do I make that vary per environment? Otherwise I'd still have to change the property file for each environment and re-deploy. I know there are some configurations on the Portal, but I am not sure if that will work in my instance.
    For example, I have a URL that varies based on my environment. I don't want to hard-code it and re-compile for each environment. I'd prefer to get that information on the fly by knowing which environment I'm running in and loading the appropriate URL.
    So far the only thing I've found that comes close to telling me where I'm running is a parameter map, but the 'key' in the map is the URL, not the value, and I suspect there's a cleaner way to get something like that. I used Eclipse's autosense in NetWeaver to discover some of the things available in my web context.
    Here's the code I used to get that map:
    TaskBinder.getCurrentTask().getWebContextAdapter().getRequestParameterMap();
    In the forum there is an example that gets the IP address of the site you're serving from. It sounds like it is going to be (or has been) deprecated, though it worked on my system just now, and I would really rather have something like the DNS name, not something like an IP that could change. Here's that code:
    String remoteHost = TaskBinder.getCurrentTask().getWebContextAdapter().getHttpServletRequest().getRemoteHost();
    Thanks in advance for any clues you can throw my way -
    Greg

    Hi Greg:
    I suggest that you check the "Software Change Management Guide"; in this guide you can find an explanation of the best practices for working with a development infrastructure.
    This is the link:
    http://help.sap.com/saphelp_erp2005/helpdata/en/83/74c4ce0ed93b4abc6144aafaa1130f/frameset.htm
    Now, if you want to get the IP of your server or the name of your site, you can do the following:
    HttpServletRequest request = ((IWebContextAdapter) WDWebContextAdapter.getWebContextAdapter()).getHttpServletRequest();
    String server_name = request.getServerName();
    String remote_address = request.getRemoteAddr();
    String remote_host = request.getRemoteHost();
    You just need to add servlet.jar under your project properties > Build Path > Libraries.
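    Building on that, here is a minimal sketch (not official SAP guidance) of one way to avoid hard-coding per-environment values: derive an environment name from the host name the request came in on, then read a matching properties file bundled with the application. The file names, the "target.url" key, and the host-name prefixes are all made up for illustration.

    import java.io.InputStream;
    import java.util.Properties;
    import javax.servlet.http.HttpServletRequest;

    public class EnvironmentConfig {

        public static String getTargetUrl(HttpServletRequest request) throws Exception {
            String host = request.getServerName(); // e.g. "dev-portal.example.com" (hypothetical)
            String env = "prod";
            if (host.startsWith("dev")) {
                env = "dev";
            } else if (host.startsWith("qa")) {
                env = "qa";
            }
            // Looks for e.g. /config-dev.properties on the classpath, deployed with the application.
            InputStream in = EnvironmentConfig.class.getResourceAsStream("/config-" + env + ".properties");
            Properties props = new Properties();
            props.load(in);
            in.close();
            return props.getProperty("target.url");
        }
    }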
    Good Luck
    Josué Cruz

  • Any "best practices" for managing a 1.3TB iPhoto library?

    Does anyone have any "best practices" or suggestions for managing and dealing with a large iPhoto library? I currently have a 1.3 TB library. This is made up of anything shot in the past 8 years, culminating with the past 2 years being 5D Mark II images. It also includes a big dose of 1080p video shot with the camera. These are only our family photos, so I would hate to break up the library into years, as that would really hurt a lot of iPhoto's "power".
    It runs fine in day-to-day use, but I recently tried to upgrade to iPhoto 11 and it crashes repeatedly during the library upgrade process.
    (I have backups, so no worries there.)
    I just know with Lion and iPhoto 9 being a bit old my upgrade day is coming and I'm not sure what my path is.

    If you have both versions of iPhoto on your Mac, then try the following: while running iPhoto 09, create a new test library and import a few photos into it. Then try to convert that test library with iPhoto 11. If it converts OK, then your big library is causing the problem.
    If that's the case, you can try rebuilding your working library as follows: make a temporary backup copy of the library and then:
    launch iPhoto with the Command+Option keys held down and rebuild the library.
    select the options identified in the screenshot.
    Once rebuilt, try converting it to iPhoto 11.
    NOTE:  if you already have a backup copy of your library you don't need to make the temporary backup copy.
    OT

  • Best practice for managing Apple IDs in an enterprise environment

    We have just started to incorporate iPads into our corporate environment. We have had them in use for the last couple of months and, to be fair, they have worked well, but I guess now is the time to try and work out some of the issues we are experiencing with them, the recurring issues that just seem to keep haunting us.
    As part of the deployment we gave those with existing Apple IDs the option of using their own, particularly those execs who already had iPads and iPhones. We are currently using ActiveSync with our Exchange server for access to email.
    How do other people manage Apple IDs?
    How do other people handle the password-change scenario? We have had users on holiday whose passwords have expired and they have been unable to access email... There is also the issue that you reset your password on the network but still need to re-authenticate it on the iPad... does anyone have a decent app that can get us round this?
    What issues have others found?

    Hi ToRnUK,
    The information below will explain some options for managing multiple iPads in your business.
    Apple - Support - iPad - Enterprise
    http://www.apple.com/support/ipad/enterprise/
    Apple - iPad in Business - IT Center
    http://www.apple.com/ipad/business/it-center/
    Cheers,
    Judy
