Help against application help

Hi experts,
In my SAP configuration the application help gives the following error. Please suggest possible ways to correct it:
"could not find file
i.p address
SAPHelp\PlainHTML\helpdata
EN\fc\16c1a8dc4911d299970000e83dd9fc\frameset.htm" for details please check the entries in the file SAPDOCCD.LOG in the windows directory.
gaurav

Hi Gaurav,
Did you check the settings of the shared folder? Perhaps you don't have permissions for this share. Are the server and your PC in the same domain?
Did you try to open the file
\\<IP address>\SAPHelp\PlainHTML\helpdata\EN\fc\16c1a8dc4911d299970000e83dd9fc\frameset.htm
directly on the server where the help files are stored?
Best regards,
Kai

Similar Messages

  • Configuring Forms Builder 10g to run (Ctrl-R) against application server

    I would like to run forms from my working directory against our development application server and config, rather than having a local OC4J running. I have configured this by setting the application server URL runtime preference to "http://<server>.<domain>:80/forms/frmservlet?config=dev". This works fine.
    However, when I run the Form, I get "FRM-40010: Cannot read form H:\Neil\TAF00180.fmx". This is because we open our FMBs from a locally mapped drive (i.e. I opened the FMB from H:\Neil\TAF00180.fmb). When run against the application server, it seems Forms Builder tries to run an executable with the same path as the FMB (H:\Neil\TAF00180.fmx), but the application server has no knowledge of a mapped H: drive.
    The only solution I can think of is to have an H-drive mapped on the application server to the same location as our locally mapped network drive.
    This doesn't seem like the correct solution to me. How should we configure Oracle Forms 10g and the application server to allow us to run our own working versions against the application server and config?

    Slava Natapov wrote:
    I guess Neil's concern is that OC4J and the Forms Server each use their own formsweb.cfg and .env files, and the settings can be different. For example, what if the local formsweb.cfg is configured to run JInitiator but the server formsweb.cfg is configured for the JPI, or they are configured with different lookandfeel parameters, or the local and server default.env use different NLS_DATE_FORMAT settings?
    Hi Slava, Hi Neil,
    I think this is more related to "discipline" among the developers...
    @Neil: How many developers do you have on your team? Are developers allowed to "customize" their formsweb.cfg and .env files? Or, beyond that, customized basejinit.htm, registry.dat, additional jar files, etc.? Or different ORACLE_HOMEs for their Developer Suite installations?
    @Slava: Using local formsweb.cfg and .env files might not be a problem if every developer machine is installed the same way (i.e. the same ORACLE_HOME).
    Also, I don't think that pointing workingDirectory to a network share for each developer is a solution:
    [dev1]
    workingDirectory= H:/dev1
    [dev2]
    workingDirectory= H:/dev2
    And at the very least this requires customized *.env files, as FORMS_PATH has to be set in them for each developer's runtime.
    The registry entry is only used by the Builder to find, for example, the .mmb and .pll files and to compile correctly; the runtime uses the *.env files.
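    To make that concrete (a rough sketch only; the [dev1] section, the paths and the dev1.env values are hypothetical), per-developer runtime settings would mean something like this in formsweb.cfg, plus one .env file per developer:
    [dev1]
    workingDirectory=H:/dev1
    envFile=dev1.env
    # dev1.env
    FORMS_PATH=H:\dev1;C:\myApp\forms
    NLS_DATE_FORMAT=DD-MON-RR
    This per-developer duplication is exactly what a single, version-controlled FORMS_PATH (as described below) avoids.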
    And what about developing on Windows and deploying to Linux?
    I recently managed a project with junior and undisciplined developers. The boss had the same concern about having a network drive per developer.
    My concern was how to eliminate all subjective considerations.
    And my answer was "Subversion": no more mess with local or network versions and the "copy of the copy of the copy" of the .fmb files.
    So every FORMS_PATH was set to C:\myApp\forms, etc., and each developer checked out and committed versions twice a day.
    The Application Server working copy was also "checked out" to the latest revision, which was then run as a pre-production version.
    If I had to make important changes to the *.olb or *.pll files, I committed them for all developers, recompiled all *.fmb files, re-committed, and everyone checked out again.
    It really didn't matter whether a developer used JInitiator or the JPI, as long as they had the right working copy.
    A standalone OC4J was enough for each developer's basic tests.
    Further tests were made by the project managers at the application server level, and adjustments (if required) were sent back to the developers.
    Hope this helps.
    Regards,
    Jean-Yves

  • Metric Collection Errors against Application Server 10.1.3.1

    Hi,
    I'm getting metric collection errors like the one below:
    Target: midtier_2.mn224_HTTP Server
    Type: Oracle HTTP Server
    Metric: Resource Usage
    Collection Timestamp: 06-Dec-2007 14:12:22
    Error Type: Collection Failure
    Message: Missing Properties : [version]
    I'm running Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
    The version of the midtier Application Server installation is 10.1.3.1.0
    Can anyone help me with this?

    Save yourself some heartburn and upgrade to 10.2.0.3...

  • How to create and use a boot volume clone?

    Hello. My name is Cassiano and I'm NOT a system administrator. I'm an architect. I design and build houses, not systems. And the reason I'm telling you that is because I need your help.
    The short version of my problem is this: "How do I create a boot volume clone", but I think I need to tell you the whole story.
    Here at the office I work we have 18 iMacs and 1 Xserve running OS X Server 10.4.9.
    Although I'm an architect, I'm responsible for keeping the computers up and running.
    Not because I want to, but because I'm the one here who knows a bit more about this stuff; I'm not an expert, but I know quite a lot. I did pretty much everything here: configured the network and firewall, installed the Xserve, etc. Of course, if something breaks, we just take the computer to the Apple Store and wait for it to be fixed.
    The Xserve has 3 HDs inside: one in bay 1, which is the boot drive, and two others in bays 2 and 3. The HDs in bays 2 and 3 work as one mirrored volume using SoftRAID, and we use this volume to store all of our data. That is why we chose to mirror them: if one fails, the other has everything and we just have to get a new one.
    Last week we decided to set up the Xserve to be our mail server as well, so I turned on the Mail service and, after a lot of reading, I managed to configure it correctly. In fact we have our MX records pointing to a company that relays the email to our Xserve, so it is not really accessible from the outside world.
    But this Mail thing has made me lose sleep. The reason is: before that, our Xserve was used only as a file repository. So if it broke or something, all we had to do was take the HD out of it, send the unit to the Apple Store and plug the HD into another computer. All of our files would be safe and everybody would be happy. Of course this is an oversimplified situation, but that is pretty much how it would go.
    But now everything is different. Every person here has the email account configured to the Xserve, which means that if it breaks or something goes wrong, no one will be able to send or receive emails. The new messages will not be lost, because the company that relays the emails to us will keep them there, but things will get chaotic here.
    While reading this document: http://developer.apple.com/documentation/MacOSXServer/Conceptual/XServerProgrammingGuide/Articles/abpinstall.html I saw a section called "Cloning the Boot Volume".
    It says: "The use of the boot volume clones is entirely to ensure the highest availability possible for your system volume, not your client data."
    So, finally, my question is: how do I implement this "boot volume clone"? If I buy a new HD and place it in bay 4, will I be able to do that? How?
    And after doing it, if something goes wrong, how do I use the "clone volume"?
    Thank you for your patience and any help is greatly appreciated.
    Best regards,
    Cassiano Forestier
    Specs of the Xserve
    Machine Name: Xserve
    Machine Model: Xserve1,1
    Processor Name: Dual-Core Intel Xeon
    Processor Speed: 2 GHz
    Number Of Processors: 2
    Total Number Of Cores: 4
    L2 Cache (per processor): 4 MB
    Memory: 1 GB
    Bus Speed: 1.33 GHz
    Boot ROM Version: XS11.0080.B00
    SMC Version: 1.11f5
    LOM Revision: 1.2.1

    Yes, your server is becoming central to your business universe. Mine, too.
    Boot from DVD, and use Disk Utility to create the clone. Another way of looking at this is as a photocopy or snapshot or block-by-block copy of the state of the disk at the moment the clone is made. You can then either boot from the clone (if needed; assuming the clone is on a bootable device), or boot from the DVD and copy the clone back to the original (or replacement) disk; this restores the disk state to that of the time the clone was made.
    Yes, you'll need a disk to clone to. Probably more than one, since you'll probably want to set up a rotating pool, should you need to go back to an older snapshot.
    You're undoubtedly also approaching the need for either an add-on storage array or storage shelf on one of your PCI-X or PCIe I/O expansion slots, or a Fibre Channel card in an open PCI-X or PCIe slot connected to the external FC SAN Xserve RAID configuration that Apple sells.
    The external storage can be "just a bunch of disks" (JBOD) in an external disk shelf, an external RAID via an add-in eSATA or SCSI controller, the Xserve RAID via Fibre Channel in the FC SAN configuration, or (for lower speeds) FireWire or (lowest) USB.
    If your data is worth it and you want to deal centrally with Apple and your budget allows it (qv: "if your data is worth it"), Xserve RAID is the way to go. The external storage shelf is certainly a feasible approach, but you'll likely end up dealing with a third party for the configuration, set-up and storage gear.
    If you're looking for things that can go wrong here, recognize that RAID only protects against drive failure. RAID does not protect against accidental or malicious file deletions, fat-fingered operator commands, or application- or system-level disk corruption; it (only) seeks to protect against disk errors. Real protection gets into tapes, near-line disk copies, or other such media: external tape or other archival media, and particularly media that can be sent off-site on a regular basis.
    It's good that you're thinking about this stuff, too.

  • ADF rich client: How to automate testing

    Hi Experts,
    We need to do some automated testing against an application developed with ADF Faces 11g. I've tried LoadRunner 8.1, which provides two approaches to recording web applications: web (HTML/HTTP) and web (click/script).
    According to my experiments and this article, it's not feasible to record actions against a rich client app with the first protocol.
    As for the second one, in my tests some actions work and others don't (for example, typing into an input field or expanding an accordion).
    My question is: is there a solution to fully automate ADF 11g UI interactions? A solution based on LoadRunner is preferred.
    Thanks for any help,
    Todd

    Thanks CM.,
    Sorry I didn't make my question clear. Selenium does handle AJAX well, but it seems to lack the ability to do stress testing. What if we need a highly concurrent workload (say, 5,000 users)? Can Selenium RC or some other tool generate that?
    And are you sure that LoadRunner 8.1 cannot do stress tests against ADF rich client apps? I need a good reason to persuade my bosses to consider adopting new tools.
    Regards,
    Todd

  • App crashes when jumping from article with video to other article with video

    I read a thread about this issue earlier today, dating back to January, in which an Adobe employee said that this issue has been fixed, but unfortunately I'm experiencing it right now. I have multiple articles that include videos and another article with many large photos. When I try navigating from one article to the other, the Content Viewer crashes and goes back to the home screen. According to the thread this is a known memory issue, and I followed the suggestions to reinstall the Viewer and restart the iPad, but it didn't help. Right now the only way to use the app is to navigate to the Cover and continue from there, which defeats the point of having navigation throughout the app. I can't just take out the videos or the photos to reduce the file size. Any advice? Thanks!

    Hello
    Sometimes the Adobe Viewer, or a custom Viewer, appears to crash. However, iOS is in fact shutting it down because of a low-memory state. You can confirm whether you have this issue by syncing an affected iPad to a Mac and then retrieving the logs to see if there are LowMemory-[time stamp].log files. Such log files, as well as crash log files, are found here:
    Users/[user name]/Library/Logs/CrashReporter/MobileDevice/[device name]
    Look for a LowMemory log file with a time stamp that corresponds to the application's exit. Open it in a text editor and look for a line like the following:
    viewer <42d0b24207f7d625b2fefef7549afce3>   33490 (jettisoned) (active)
    The numbers can vary. However, if the line indicates that the "viewer" process was "jettisoned" while "active", the OS closed the process because of a low-memory state. The most likely reason for this is that your viewer is reading into memory images that are larger than 1024 x 1024 pixels. Apple recommends against applications loading any image larger than this size, as it contributes to a low-memory state on the device and can result in what appear to be application crashes. Review your folio source files and ensure that no larger images are used. If they are, simply resample them and republish your content.
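    (As one possible way to batch-resample on the Mac you sync with, the built-in sips tool can cap both dimensions at 1024 pixels; the folder and file extension below are only examples:
    sips -Z 1024 ~/folio-images/*.jpg
    Note that sips modifies the files in place, so work on copies.)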
    The low memory state can also be the result of having too many applications running at once. Shut down applications you are not actively using to free up more memory.
    How to free up memory on your iPad
    1. From the home screen, double-click the Home button to display recently used applications.
    2. Tap and hold one of the applications until a red minus appears above it.
    3. Tap the red minus for all apps that are not currently needed.
    4. Tap anywhere above the list of recently used apps, to return to the home screen.
    Laxman

  • Adding fields in the transaction FS10N (G/L Account Line Item Display)

    Hi all,
    In G/L Account Balance Display (FS10N), once I get the list display and click the CUMULATIVE DISPLAY button, it goes to the G/L Account Line Item Display list.
    Here I have to add a CUSTOMER field (sold-to party or partner) and a NAME field to the list, beside the G/L account and company code fields.
    Please let me know how to proceed: do I have to make any background settings, or do I have to copy the program into a Z program and then modify it?
    It is an urgent requirement, please let me know.
    Thanks

    Hi,
    To write a substitution exit you will need the help of your ABAPer.
    Use transaction GGB1 to create a substitution. In the application area, select Financial Accounting --> Line Item.
    Create a substitution for your company code and, under the substitution, create a step. When you click on the step it will give you a list of fields in table BSEG; select any one of XREF1, XREF2 or XREF3. In the next screen select "Exit". In the prerequisites, give the condition BKPF-BUKRS = 1000 (your company code).
    In the substitution, select your exit against your XREF1, XREF2 or XREF3 field.
    In the exit code, the logic can be: read BSEG-KUNNR, look it up in table KNA1, and populate BSEG-XREF1 with KNA1-NAME1.
    Save the substitution and activate it in transaction OBBH.
    For the exit itself, copy the standard program RGGBS000 to a Z program and write your new code there. This step should be done before you create the substitution. Use transaction GCX2 to maintain your Z program against application area GBLS.
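    As a rough, untested sketch only (the exit name U100, the title text and the customer-name logic are placeholders; the surrounding skeleton follows the standard RGGBS000 template), the Z copy could contain something like:
    PROGRAM zrggbs000.
    TABLES: bkpf, bseg.
    INCLUDE fgbbgd00.                            " standard substitution exit definitions
    FORM get_exit_titles TABLES etab.            " registers the exit so GGB1 can list it
      DATA: BEGIN OF exits OCCURS 50,
              name(5)   TYPE c,
              param     LIKE c_exit_param_none,
              title(60) TYPE c,
            END OF exits.
      exits-name  = 'U100'.
      exits-param = c_exit_param_none.
      exits-title = 'Fill XREF1 with customer name'.
      APPEND exits.
      REFRESH etab.
      LOOP AT exits.
        etab = exits.
        APPEND etab.
      ENDLOOP.
    ENDFORM.
    FORM u100.                                   " exit U100: customer name into XREF1
      DATA lv_name1 LIKE kna1-name1.
      IF NOT bseg-kunnr IS INITIAL.
        SELECT SINGLE name1 FROM kna1 INTO lv_name1
               WHERE kunnr = bseg-kunnr.
        IF sy-subrc = 0.
          bseg-xref1 = lv_name1.
        ENDIF.
      ENDIF.
    ENDFORM.
    Which of XREF1/2/3 you fill, and the company-code restriction, stay in the GGB1 step and its prerequisite.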
    Also make sure the Reference Key 1/2/3 field is set to optional in all the relevant posting keys and field status groups.
    Please let me know whether this helps.
    Regards,
    Swapnil

  • Find obligatory fields in the transaction MM01

    Hi,
    My requirement is to create a Z table which will store all mandatory fields from the transaction MM01.
    Is there a way to know which fields are mandatory without being in the transaction MM01 or OMS9? In which system table is this information saved?
    Regards,
    Roberto

  • Mac OS X Server Backup solutions?

    Hi all, I have built up my network with 3 Leopard servers running on Mac Pros and 8 Mac clients (one mobile).
    Between all of them there is about 12 TB of storage space (about 2 TB in use at present).
    It really is time now to invest in a complete backup solution, and we want a format that can be taken offsite (tape sounds the best).
    I really have hardly any experience in this area and need to get some advice.
    I have an ATTO Ultra320 card in one of the Mac Pros, so I need to figure out which tape drive (or drives), which software, and the best way to implement it.
    All the clients are on OD, so I'm not overly bothered about backing them up; it's really just the 3 main servers.
    Any help would be really appreciated, thanks.

    Here's some grist for the thought mill...
    The use of "Removable" here likely (probably?) means "remote", which itself (and depending on the bandwidth of the network connection or access to couriers or such) may or may not actually be a removable disk or removable tape storage, or disk or tape libraries.
    What's the volume of the actual data? What's the rate of change of the data? There are two parts to the calculation for the creation and operation of the archive: the initial and occasional (weekly, monthly, before an upgrade) full archives, and the incremental (hourly, daily, weekly) archives. This assumes the usual two-level archive processing: an occasional full and a more frequent incremental.
    You need to figure out how big these activities are, and what your backup window is. These details then drive the available hardware and media options and then the device selection.
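    As a purely illustrative calculation (made-up numbers, not your figures): with roughly 2 TB in use and, say, a 2% daily change rate, a full archive is about 2 TB and each daily incremental is on the order of 40 GB. At the 80 MB/s native rate of an LTO-3-class drive, the full pass alone is about 2,000,000 MB / 80 MB/s ≈ 25,000 seconds, or roughly 7 hours, which has to fit inside whatever backup window you have.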
    Then determine (or guesstimate) the growth rate of the data. That tells you what your approach can support now, and how quickly your backup window (if you have one) might be closing. And (if you're using smaller media) when you might need to go to a larger-capacity media or to multi-volume archives. Right now, you can probably get most everything onto a 1 TB or 1.5 TB spindle, for instance.
    You'll also need to sort out the local and continuously-active processing activities, and how to get copies of those. The sqldump tool, for instance, can be the path to get a recoverable installation. And that processing tends to be part of getting a recoverable archive.
    There are cloud services around for storage, and (if your change rate is less than the bandwidth "slop" available in your current network pipe) you can archive to a remote site that you manage or contract with, to a storage site or warm site you work with, or out to Amazon S3 or another competing storage pool. (There are various options for archiving out to Amazon S3, for instance. And Time Capsule and Time Machine are seriously slick.)
    Never assume that RAID is an archival strategy. It's not. RAID is useless against application corruptions, blown software upgrades, user errors, client and server and storage theft, and malicious-user activities; nor against a roof-mounted heat exchanger system that sprouts leaks and pours red-colored coolant into the server racks. RAID protects against disk spindle failures and (for some specific configurations) against certain other "upstream" storage controller failures.
    When you get all done, make sure you can recover and restart using your archives. Periodically test the recoverability of the archives.

  • Issue while loading berkeley database

    Hi,
    I have an issue while loading data into Berkeley DB. When I load the XML files into the Berkeley database, some files are created, named something like db.001, db.002, log.00000001, etc. I have a server running which tries to access these files. When I try to reload the Berkeley DB while my server is running, I am unable to load it; I have to stop the server, load the Berkeley DB and restart the server again. Is there any way I can reload the database without having to restart the server? Your response would help me find a solution to this issue. I am currently using Berkeley DB version 2.2.13.
    Thanks,
    Priyadarshini

    Hi Priyadarshini,
    The db.001 and db.002 are the environment's region files and the log.00000001 is one of the environment's transactional logs. The region files are created when you use an environment, and their size and number depend on the subsystems that you configure on your environment (memory pool, logging, transactions, locking). The log files reflect the modifications that you perform on your environment's database(s), and they are used along with the transaction subsystem to provide recoverability, ACID capabilities and protect against application or system failures.
    Is there a reason why that server tries to access these files? The server process that runs while you load your database should not interfere with those files, as they are used by Berkeley DB.
    Regards,
    Andrei

  • XServe won't boot up

    Hello,
    I have recently had to re-image the Xserve G5 unit that I work with, but have run into some problems. Since re-imaging, whenever the unit boots up it gets to the grey Apple screen with the spinning circle, and then I get the error screen telling me that I have to reboot my computer. This happens every time on boot-up and I am unable to get to the login screen. I have tried resetting the PRAM and also restoring the firmware settings.
    I have since tried to re-image the drive again; however, the FireWire drive (where my disk image is located) is not being recognized at startup. I have tried holding down Option at startup, but only the main drive is displayed, and when I try target disk mode the FireWire icon just bounces around on the screen. I verified the FireWire disk on another computer and everything checked out OK. Unfortunately, booting from a CD is not an option right now and the image is only located on the FireWire drive.
    Does anyone have any ideas on how to either get it to boot up correctly or get it to recognize the FireWire drive at boot-up? I do have a cluster Xserve attached; should I perhaps try re-imaging the hard drive of the head node using one of the cluster units?
    Thank you for your help.

    Cloning to a co-resident partition leaves you subject to a disk spindle failure.
    A successful data archive requires some depth, in addition to completeness. Some amount of media redundancy is typical here.
    Adding a set of disks into the archival rotation here can be useful, and (for higher-value data) one or more of these copies can be periodically transported to another building or another site, preferably out of range of fire, flood or theft.
    And RAID isn't a panacea; it (usually) allows you to survive spindle failures, but it does not protect against application corruptions, various disk volume structure corruptions, or intentional or errant data deletions.

  • COPA cost estimate using costing sheet

    Dear Experts,
    I have done the relevant COPA configuration related to material cost estimates using a costing sheet. I have maintained all the condition types required in the costing sheet.
    Can anyone please help me with the transaction code I should run to do the material cost estimate using the costing sheet, and also with how I can link the estimated material cost into my profitability analysis report?
    Thank you
    Regards
    Paul

    Hi Paul
    What is your purpose?
    1. If you just want to fetch the material cost estimate (std cost) in COPA, then what you are doing is not correct...For this,
    a. KE4U - Valuation Strategy 001 - (DETAILS Tab) assign 001 to Qty Field ABSMG and tick "Mat Cstng" indicator
    b. KE4U - (Assignment Tab) Assign 001 to PV=01 and Record Type = F
    c. Create a Costing key in KE40
    d. Assign Costing key to mat types in KE4J or KEPC
    e. Assign Cost Comp Split to Value Field in KE4R
    2. A costing sheet in COPA is used to do some calculations during billing and update them to COPA... For example, you want to add 2% of your standard cost as freight to COPA (as a statistical value)... This can be done using a costing sheet...
    In order to use Costing sheet in COPA,
    a. Create a Costing sheet - (You already did this)
    b. KE4U - Valuation Strategy 001 - (DETAILS Tab): assign this costing sheet against "Application = KE" and Qty Field = ABSMG. Do not tick the "Mat Cstng" indicator for this entry.
    So, if you want to use Costing sheet in COPA, your KE4U may have 2 entries under the "DETAILS" tab..
    (1) As per 1.a and 1.b above
    (2) As per 2.b above
    Regards
    Ajay M

  • Why do I need to have and use a keychain if I am the only person to ever use my Mac?

    Is netflix the best for movies?

    Netflix is a great service to use, and it works well on Apple TV, any iOS device and your computers. Netflix is great for TV shows by season, and usually gets the latest movies within 30 days of their release.
    Keychain is also there for security against applications accessing passwords, not just users.
    The keychain service found on Mac OS X is significant for two major reasons. First, it uses strong cryptography. Over time, many applications have provided the capability to store passwords but have failed to do so in a way that can be trusted or considered safe from compromise. Second, the storage and retrieval of all passwords is handled by a single service, not individual applications. After you have decided to trust this service, any time you are asked to store a password you don't have to worry about the means by which it is protected. What you do need to be careful of, though, is which applications you allow to make use of your keychain(s). When you unlock a keychain and allow an application to access a password, you open the door for that application to slip up and compromise the security of that password (more on this later). Although the keychain provides strong secure storage for passwords, any application can make use of the keychain API. Thus, be mindful of the applications to which you grant access to your keychain(s).
    NOTE: You are in the Apple TV section, but the question was about a Mac. To get more coverage, please post related questions in their designated forum.

  • Popup closing even with validation errors in IE - works correctly in FF

    Hi,
    Using JDeveloper 11.1.1.4.
    We have a page where the user can upload files via a popup. After selecting the file to upload and submitting, the file type and size are validated against application parameters. If the validation fails, the inputFile component is set invalid and a ValidatorException is thrown. In Firefox, the functionality is as expected: the popup stays open and the validation error message is displayed next to the inputFile component. In Internet Explorer, however, the popup closes without any error being displayed.
    Popup:
    <af:popup id="pt_fileUploadPopUp" contentDelivery="lazyUncached"
              binding="#{FileUploadOperations.uploadFormPopup}">
      <af:inputFile label="#{UC1315ResourceBundle['L1336']} "
                    id="pt_i3f1"
                    value="#{FileUploadOperations.uploadedFile}"
                    requiredMessageDetail="#{UC1315ResourceBundle['REQ_MSG_FILE_UPLOAD']}"
                    validator="#{FileUploadOperations.validateFile}"
                    required="true"/>
      <af:commandButton id="pt_CB_2001"
                        text="#{UC1315ResourceBundle['BUTTON_2007']}"
                        actionListener="#{FileUploadOperations.uploadFile}"
                        partialSubmit="true"
                        disabled="#{pageFlowScope.pApplicationMode == 'PREVIEW'}"/>
    </af:popup>
    Validator method:
        // imports needed by this snippet:
        import javax.faces.application.FacesMessage;
        import javax.faces.component.UIComponent;
        import javax.faces.context.FacesContext;
        import javax.faces.validator.ValidatorException;

        public void validateFile(FacesContext facesContext,
                                 UIComponent uIComponent, Object object) {
            try {
                // validation logic (file type and size checked against application parameters)
            } catch (CustomFileUploadException exp) {
                // inputFileComponent is the af:inputFile bound to this backing bean
                inputFileComponent.resetValue();
                inputFileComponent.setValid(false);
                FacesMessage facesErrorMessage = new FacesMessage();
                facesErrorMessage.setSeverity(FacesMessage.SEVERITY_ERROR);
                facesErrorMessage.setDetail(exp.getMessage());
                facesErrorMessage.setSummary(exp.getMessage());
                throw new ValidatorException(facesErrorMessage);
            }
        }
    Any ideas what might be the problem here?
    Thanks,
    Joonas

    @Srinivas, I changed the component inside the popup to af:dialog (was af:panelWindow before) but this didn't solve the problem - the behavior is the same. Thanks for the suggestion though.
    @Frank, seems this problem is implementation-specific. I created a simple test case and the validation error was displayed correctly in IE.

  • Synchronize MSAD and essbase groups and users automatic

    I know it is not possible to import users and groups into Essbase directly from MSAD or LDAP. All users are defined with external authentication.
    Is there a way to create an application which makes this synchronization possible?
    May I have some samples?
    Thanks.

    If you are using LCM, and the groups are native and the users are externally provisioned against the groups, then you will need to export the groups; if you want the provisioning against applications as well, then you will also need to take that across.
    Cheers
    John
    http://john-goodwin.blogspot.com/
