Routing error in Application Best Practices app (TDG)?

Hi All,
I'm trying the Application Best Practices app, but whenever I select a different product,
the category and the vendor shown stay the same, even though the values in the JSON files differ.
Or is this not supposed to work in mock mode?
regards
Bernard

Hi,
I'm not talking about the add. If you select a product, the vendor address is always the same, but it is different in the mock-data JSON file!
cheers
Bernard

Similar Messages

  • Slow startup of Java application - best practices for fine-tuning the JVM?

    We are having problems with a Java application which takes a long time to start up.
    To understand our question, it helps to start with some background info. You will find the question(s) after that.
    Background:
    The setup is as follows:
    In a client-server solution we have a Windows XP fat client running Java 1.6.0_18
    (Sun JRE). The fat client contains a lot of GUI, and connects to a server for DB access. Client machines are typically 1 to 3 years old (there are problems even on brand-new machines). They have the client version of the JRE - Standard Edition - installed (Java SE 6 update 10 or better). Pretty much the usual stuff so far.
    We have done a lot of profiling on the client code, and yes, we have found parts of our own Java code that need improving; we are all over this. The server side seems OK with good response times. So far, we haven't found anything pointing to shaky net connections or endless loops in the Java client code or similar.
    Still, things are not good. Starting the application takes a long time. Too long.
    There are many complicating factors, but here is what we think we have observed:
    There is a problem with cold vs. warm starts of the application. Apparently, after a reboot of the client PC, things are really, really bad, and it (sometimes) takes up to 30-40 secs to start the application (until we arrive at the start GUI in our app).
    If we run our application, close it down, and then restart
    without rebooting, things are a lot better. It then usually takes
    something like 15-20 sec, which is "acceptable". Not good, but acceptable.
    Any ideas why?
    I have googled it, and some links seem to suggest that the reason could be the disk cache, where vital jars are already in the disk cache on the warm start? Does that make any sense? Virus scanners presumably run in both cases.
    People still think that 15-20 sec of startup on the warm start is an awfully long time, even though there is a lot, a lot, of functionality in the application.
    We got a suggestion to use IBM's JRE, as it can do some tricks (not sure what) our Sun JRE can't do concerning the warm- and cold-start problem. But that is not an option for us. And no one has come up with any really good suggestions for the Sun JRE so far?
    Then there is the Java Quick Starter (JQS), which
    improves initial startup time for most Java applets and applications.
    Might that be helpful? People on the internet seem more interested
    in uninstalling the thing than actually installing it, though?
    And it seems very proprietary, where we can't give our jar files to it?
    We could obviously try to "hide" the problem in some way and make it "seem" quicker, since perceived performance can be just as good as actual performance. But that does seem a bad solution. So for the cold start we will probably try reading the jar files beforehand, thereby having them in the disk cache before startup of our application, and see if that helps us.
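    That jar pre-reading idea can be sketched as a small pre-launch script. This is only a sketch under assumptions: the application's jars are taken to live in a lib/ directory, and the dummy jar is created here purely so the snippet is self-contained.

```shell
# Stand-in jar so the sketch runs anywhere; in the real setup,
# lib/ already holds the application's jars.
mkdir -p lib
printf 'placeholder' > lib/app.jar

# Read every jar once so the OS pulls it into the disk cache
# before the JVM starts and needs it (the cold-start theory above).
for jar in lib/*.jar; do
  cat "$jar" > /dev/null
done
echo "prewarmed $(ls lib/*.jar | wc -l | tr -d ' ') jar(s)"
```

    Whether this actually helps depends on the OS cache behaviour; timing a cold boot with and without the script is the only way to know.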
    Still, OK, the cold start is the real killer, but the warm start isn't exactly wonderful either.
    People have suggested that we read more on the JVM and performance.
    java.sun.com/javase/technologies/performance.jsp
    java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html
    the use of JVM flags "-Xms", "-Xmx", etc.
    And here comes the question .. da da ...
    Concerning the various suggested reading material:
    it is very much appreciated, but we would like to ask people here if it is possible to get more specific pointers to where the gold might be buried.
    I.e. in an ideal world we would have time to read and understand all of these documents in depth. However, in this less than ideal world we are also doing a lot of very time-consuming profiling in our own Java code.
    E.g. Java garbage collection is a huge subject, and JVM settings also. Sure, in the end we will probably have to do this all very thoroughly. But for now we are hoping for some heuristics on what other people are doing when facing a problem like ours..?
    Young generation, large memory pages, garbage collection threads etc. all sound interesting - but what would you start with?
    If you don't have info to decide - what kind of profiling would you be running, and then adjust which JVM setting in your trials?
    In this pressed-for-time scenario, ignorance is not bliss, but it makes it hard to pinpoint the JVM parameter or parameters to adjust. So some good pointers from experienced JVM "configurators" will be much appreciated!
    Actually, if we can establish that fine-tuning these parameters is a good idea, it will certainly also be much easier to allocate the time for doing so - reading, experimenting etc. - in our project.
    So, all in all, what kind of performance improvement can we hope for? 5 out of 20 secs on the warm start? Or is it 10% nitpicking? What's the ballpark figure for what we can hope to achieve here, given our setup? What do you think, based on the above?
    Maybe someone out there has done some fine-tuning of JVM parameters in a similar PC environment, with similar fat clients...? "Fine-tuning so-and-so gave 5 secs, so start your work with these one or two parameters."
    Something like that - some best practices? That's what we are hoping for.
    best wishes
    -Simon

    Thanks for helpful answer from both you and kajbj.
    The app doesn't use shared network drives.
    What are you doing between main starting to execute and the UI being
    displayed?
    Basically, calculating what to show in the UI. Accessing the server - not so much; there are some reads from a cache, but the profiling doesn't indicate that it should be a problem. Sure, I could shift the startup time to some other slot, but so far I haven't found a place where the end user wouldn't be annoyed.
    > Caching of something would seem most obvious. Normal VM stuff seems unlikely.
    With profiling I basically find that ''everything'' takes a lot longer in the cold-start scenario. Some of our local Java methods are going to be rewritten following our review. But what else can be tuned? You guys don't think the Java Quick Start approach, with more jars in the disk cache, will give something? And how should that be done / what do people do?
    I.e. for the class loader I read something about
    1.Bootstrap class loader
    2.Extensions class loader
    3.System class loader
    and am wondering if this has something to do with the cold-start problem?
    The extensions class loader loads the code in the extensions directories (<JAVA_HOME>/lib/ext).
    So, should we move app classes to ext? Put them in one jar file? (We have many.) Best practice about that?
    Otherwise it seems to me that it must be about fine-tuning the JVM?
    I imagine that it is a question about:
    1. the right heap size
    2. the right garbage collection scheme
    Googling heap size for XP,
    CHE22 writes:
    You are right; -Xms1600M works well, but -Xms1700M bombs
    Is that one best practice, or what?
    On garbage collection there are numerous posts, and much "masters of Java black art" IMHO. And according to profiling, GC is not really that much of a problem anyway? Still,
    based on my description I was hoping for a short reply like "try setting these two parameters on your XP box, it worked for me" ...or something like that. With no takers on that one, I fear people are saying that there is nothing to be gained there?
    we read:
    [ -Xmx3800m -Xms3800m
    Configures a large Java heap to take advantage of the large memory system.
    -Xmn2g
    Configures a large heap for the young generation (which can be collected in parallel), again taking advantage of the large memory system. It helps prevent short lived objects from being prematurely promoted to the old generation, where garbage collection is more expensive.
    Unless you have problems with pauses, try granting as much memory as possible to the virtual machine. The default size (64MB) is often too small.
    Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing decision from the virtual machine. On the other hand, the virtual machine can't compensate if you make a poor choice.
    The -XX:+AggressiveHeap option inspects the machine resources (size of memory and number of processors) and attempts to set various parameters to be optimal for long-running, memory-allocation-intensive jobs]
    So is setting -Xms, -Xmx and -XX:+AggressiveHeap
    best practice? What kind of performance improvement should we expect?
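    To make the quoted flags concrete, here is a hedged sketch of a launch command. The sizes are placeholders (a 32-bit XP client cannot take anything near 3800m) and client.jar is a hypothetical stand-in for the real launcher:

```shell
# -Xms == -Xmx removes heap-resizing work during startup;
# -Xmn reserves a young generation for short-lived startup objects.
# Values below are illustrative, not recommendations.
HEAP_OPTS="-Xms512m -Xmx512m -Xmn128m"
echo "java $HEAP_OPTS -jar client.jar" > launch-cmd.txt
cat launch-cmd.txt
```

    The only honest way to pick the values is to time startup with a few candidate sizes on an actual client machine.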
    Concerning JIT:
    I read this one
    [the impact of the JIT compiler is obvious on the graph: at startup the time taken is around 500us for the first few values, then quickly drops to 130us, before falling again to 70us, where it stays for 30 minutes,
    for this specific issue, I greatly improved my performances by configuring another VM argument: I set -XX:CompileThreshold=50]
    The size of the cache can be changed with
    -Xmaxjitcodesize
    This sounds like you should do something with JIT args, but reading
    // We disable the JIT during toolkit initialization. This
    // tends to touch lots of classes that aren't needed again
    // later and therefore JITing is counter-productive.
    java.lang.Compiler.disable();
    However, finding
    the sweet spots for compilation thresholds has been tricky, so we're
    still experimenting with the recompilation policy. Work on it
    continues.
    sounds like there is no such straightforward path; it all depends...
    OK, it's good when
    [Small methods that can be more easily analyzed, optimized, and inlined where necessary (and not inlined where not necessary). Clearly delineated uses of data so that usage patterns and lifetimes are apparent. ]
    but when I read this:
    [The virtual machine is responsible for byte code execution, storage allocation, thread synchronization, etc. Running with the virtual machine are native code libraries that handle input and output through the operating system, especially graphics operations through the window system. Programs that spend significant portions of their time in those native code libraries will not see their performance on HotSpot improved as much as programs that spend most of their time executing byte codes.]
    I have the feeling that we might not be able to improve performance that way?
    Any comments?
    otherwise i was wondering about
    -XX:CompileThreshold=50 -Xmaxjitcodesize (large, how large?)
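    Rather than picking a threshold blindly, the JIT flags above are probably best treated as an A/B experiment. A minimal sketch (client.jar and the flag value are hypothetical stand-ins; each variant would be timed several times and medians compared):

```shell
# Candidate launch variants to time against each other.
cat > jit-trials.txt <<'EOF'
java -jar client.jar
java -XX:CompileThreshold=50 -jar client.jar
EOF
# In a real trial, each line would be run repeatedly under 'time',
# e.g.: while read cmd; do time $cmd; done < jit-trials.txt
cat jit-trials.txt
```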
    Somehow, we still feel that someone out there must have experienced similar problems? But obviously there is no guarantee that that someone will surf by here!
    In C++ we used to just write everything ourselves. Here it does seem to be a question of the right use of other people's stuff?
    Where you are kind of hoping for a shortcut, so you don't have to read an endless number of documents, but can find a short document that actually addresses your problem ... well.
    -Simon

  • SOA composite application best practice

    Hi All,
    We are running SOA Suite 11g. One of my colleagues said that we should always have a mediator in our composite applications instead of just exposing the BPEL process as a SOAP service. Is this a correct statement? If so, why is that good practice? I'm still trying to grasp the concepts and best practices for SOA, so any information is greatly appreciated.
    Thanks,
    S

    If you place a mediator between them, you can change the BPEL interface without having to change your composite's SOAP interface.
    That's one thing which could be a best practice.

  • Large ADF Applications - Best Practice

    Hi
    We have a single ADF project (one model, one view/controller) with the following model components:
    68 AMs
    387 VOs
    175 EOs
    This project is ever expanding, and we are suffering some well-known performance problems when opening JDeveloper or opening the view/controller project.
    Are there any best-practice guidelines on how to structure your ADF projects?
    E.g. what is the maximum recommended number of AMs/VOs/EOs in a single project?
    We have kept everything in a single project after some advice from Oracle to help us to re-use common modules easily.
    We use JDeveloper v10.1.3 and use JHeadstart v10.1.3 SU 1 to generate our view/controller layer, using multiple faces-config files.
    Thanks
    Denis

    Hi Denis,
    We have exactly the same problem in expanding our application, but we have a single AM and fewer EOs and VOs than you right now. There are some threads discussing this issue, but I haven't found a complete and standard solution yet. The only thing I know is that almost everything can be segmented in ADF: projects, application modules, faces-config.xml, ...
    Another thing which is very important is that segmenting an application is a tradeoff: it has some advantages, but problems in SCM, security, ...
    S/\EE|)

  • Hotfix Application Best Practices

    I have a twofer with regards to applying a hotfix. We deployed Config Manager 2012 RTM, upgraded to SP1, and then upgraded to R2. We have never applied a hotfix or CU before, so there is a bit of mysteriousness with regards to what the best practices are.
    We are applying hotfix 2910552 to address slow imaging speeds. These questions are pretty basic, but I wanted to get some informed opinions.
    What is the best rollback procedure in the event of problems? I consider the hotfix low risk, but there is some concern from some others above me. We are planning on taking snapshots of the 3 site servers and the DB server in our hierarchy, but not the DPs. Does this seem sound, or is there a better technique?
    How essential is it to update the clients in our environment in a timely fashion, or at all? I am going to have the packages created, but I did not know whether I should deploy them immediately or not. Our server group has some concerns about applying the patch to the Config Manager clients on our servers during our patching windows.
    Any insight is appreciated. Thanks!
    Bryan

    There's more risk in taking snapshots, as they are completely and explicitly unsupported and almost certainly would cause issues, particularly since your DB is separate from your site server.
    Rollback is simply uninstalling the hotfix. That hotfix addresses a very niche issue that only manifests itself during OSD; thus, it's only important to roll it out to clients before you reimage them in a refresh scenario. An alternative rollback is simply reinstalling the site and restoring your DB. This sounds painful, and while it would take a bit of time, it's actually rather painless and works quite well.
    This all begs the question, though: you really should just do CU3. There are tons of other meaningful and impactful fixes in the CUs that will improve the overall stability and even functionality of the site and clients.
    Concerns about applying hotfixes should be addressed by performing the update in a lab first. There is no other way to comfort risk-averse folks except by showing them that it works. Additionally, you can put forth evidence from the community that CU application to ConfigMgr is almost always smooth and uneventful. Can something go wrong? Of course. I could get hit by lightning sitting in my chair, but that doesn't mean I stay in bed all day.
    Jason | http://blog.configmgrftw.com | @jasonsandys

  • Web application best practice...

    I'm creating a web site with heaps of Flash stuff on it. I'd like it to load plenty of information from a database server and save plenty of information back again. It's not a vast amount of information, by the way - I don't need to load video clips and things - but there will be loads of different Flash resources on the site, and so plenty of different occasions when there'll need to be some loading and saving going on.
    So to the question - what's best to use?
    1) LoadVars? (or the XML object) - I'm pegging these options together, as I can see that loading some of the info in XML format from, say, an ASP page could be a good idea
    2) Web services? I don't know much about web services, but plenty of big companies seem to be offering them - Google, Yahoo and Flickr for example (but do you reckon they use them themselves?). I realise I'd have to learn something a bit different like .NET (so maybe VB.NET or C# or something - any opinions about which to go for there, too?) or maybe Perl or Python or something...
    3) Flash Remoting? I have bad feelings about this one - I don't want to pay for extra stuff if it doesn't do much for me - and I understand that with this one I'll still need my server-side application anyway, so it makes me wonder what the point is? And I've also heard that the latest Flash version doesn't really support it very well.....
    So does anyone have any thoughts? I'd love to hear some opinions... I DO care about performance, and I DO care about how fussy and complicated the programming will be. My gut reaction is that web services are the way to go, because to my naive mind they seem like they'll be simpler to code and potentially have fewer bugs, and therefore be more reliable. But then I'm not trusting my naive mind, I'm asking you clever forum types instead!
    Best Wishes,
    Neil

    I don't think I can offer any definitive advice. Partly
    because it should be based on what your requirements are longer
    term I guess, and (probably mostly) because I don't know that much
    about it myself. I'll share what little I know.
    For Flash, in terms of my understanding, remoting is the fastest/most efficient means of communicating with a service you expose on your server, and I'd assume it's more scalable as an approach longer term. There are the Adobe versions with ColdFusion and, I think, .NET, but there are open-source options, e.g. AMFPHP, as well. I think that CS3/AS3 will ultimately have no problems with remoting - I read somewhere that although the remoting components are not there now, Flash CS3 can use Flex non-visual components, and also somewhere else that the remoting components are not necessary for remoting to be possible (but I guess they make it easier). I have no idea whether either is true, but I'd be surprised if long term Flash CS3 can't do remoting as well as previous versions. The AMF encoding method that is used for remoting is more readily accessible in AS3, I think, so I can't see it being a problem.
    There's another framework for a type of remoting, based on what I assume is some form of XML serialisation, called XML-RPC, but I don't know much about it.
    Both Flash remoting and XML-RPC give you the ability to not worry about how the data is translated and transferred between Flash client and server. You just deal with it in the native data structures, in a similar way to making function/method calls locally. LoadVars is great if you just want to transfer name-value pairs, e.g. sets of variables; XML is great for transferring a structured representation of data. I've become more accustomed to working with XML as it is, and using it as the basis for my data in Flash when I need to use it, which saves having custom encoding/decoding functions to change the way it's dealt with locally. This is not always possible or convenient... but with XPathAPI in AS2 it is a little easier, and I'm looking forward to using the new CS3 XML representation.
    LoadVars and XML are also great if you're starting out, and are probably essential to have as an option anyhow, so they may not be a bad option to begin with, just so you're familiar with them. I would suggest that you always know how to work with these approaches, and maybe they'll be enough for you for now anyway.
    I know nothing about SOAP or any other type of webservice.
    Don't know if that helps.

  • Schema Design for Worklist Application - best practice?

    Hello,
    we are designing the Schema for a workflow application now. I'm wondering what kind of XML Schema would be best suited for the JSP generation of the Workflow Wizard.
    So far I've found out with some tests (please correct me if I'm wrong):
    - Only elements will be mapped to JSP fields, not attributes
    - If elements have a single-letter name, the field label will be eliminated totally in the JSP (bug?!)
    - For EVERY parent node, an HTML table is generated in the JSP containing all the simple nodes in the parent. If a parent node contains another parent node, both tables will be generated on the same level.
    And I haven't found any way to create a drop-down list or checkboxes/radio buttons out of the XSD definition (enumeration as element type).
    I would really appreciate it if someone could share some experience in this area, many thanks in advance!
    regards
    ZHU Jia


  • Best Practice - App installation location?

    After switching from Windows to Mac I've tried to be pretty security conscious with how I run things. My user account doesn't have Admin, and I rarely have to escalate to Admin on my system. However, there is something that is bothering me.
    I noticed that all bundled apps are in the root Applications folder. This makes sense, as this makes them available to all users. But when installing new apps, they don't go into the Applications folder in my Home folder, instead also defaulting to root. (in fact, not a single app is in my Home folder, now that I check)
    Is this normal operation? I've noticed a lot of installers let you pick a target volume, but then dump the app straight to the root apps folder. Should I be re-directing these to my Home folder? Will this adversely affect performance?
    It just seems a little odd to me.

    I've never installed anything into Applications in my home folder (~/Applications).
    But +in principle+ (though I've only done the one...), the difference is:
    If you install it into the system-wide Applications folder (/Applications) then any user of your computer can use it. But installation requires an administrative password. Some applications will require this.
    If you install into ~/Applications, then only you can use the application, but you do not need an administrative password to install. You (or any malware that is running as you) would also be able to change it.
    Since you are paranoid about security, you should also note that if you installed an application into /Applications by dragging the application into the Applications folder and then typing an administrative password (i.e. no installer), then that application is owned by your nonadministrative user (at least in Tiger). This means you (or any malware that you have inadvertently launched) can modify the application without an administrative password, which is bad. You can check this by (single-)clicking the application in Finder and then typing Command-I (for Info) and looking at the ownership and permissions information (near the bottom of the Info window). Hopefully your administrative user, and not your nonadmin one, owns the application. If not, there might be a GUI way of "fixing" this, but the only way I know is to open a Terminal and type
    <pre>
    sudo chown -R admin:admin /Applications/application.app/
    sudo chmod -R go+Xr /Applications/application.app/
    </pre>
    where the first admin is the (short) name of your administrative user, and application is the name of the application. The second command (chmod) should strictly not be necessary, but shouldn't hurt.
    The first sudo command will prompt for your admin password and you have to already be an admin to use it. If you are launching the Terminal as your nonadmin user, then the sequence of commands must be preceded by
    <pre>
    su admin
    </pre>
    which will also prompt for your admin password.
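    A Terminal alternative to the Get Info check described above, as a sketch (the /tmp/demo.app path is a stand-in created here only so the snippet is self-contained; on a real system, point ls at the bundle you actually installed under /Applications):

```shell
# Stand-in bundle directory so the snippet runs anywhere.
APP="/tmp/demo.app"
mkdir -p "$APP"

# The owner column of ls -ld should show your admin user,
# not the nonadministrative account you normally run as.
ls -ld "$APP"
```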

  • MCX to Configure Client Dock Applications best Practice?

    I want to use WGM preferences to manage which applications show up on a user's dock when they login. The server does not have the same applications installed as the client machines thus do not show up as an available option to add using WGM.
    _The Question:_ *Is it bizarre to install client applications on the server so they may be available to add to a user's dock preference in WGM?*
    Currently I'm using changes I've made to the default user template as a workaround to getting the dock configured to our needs. Not my favorite way to administer 6 labs, but functional right now. If it's okay, I was thinking about installing all the software we use on our client machines on the server.
    Is there a better way of managing the dock for clients in a multiple lab environment?

    Run the WGM application from a workstation that has the same applications installed in the same locations as on your client workstations. Do not make changes to the user templates on the server or the workstations. When setting the preferences for the user's dock, do not select the checkbox, 'Merge with User's Dock', as that combines the customized settings you create with the dock settings from the user template.

  • Best Practices error

    Dear Team members
    I am getting following error while activating Best Practices for Discrete Mfg
    Start activation eCATT: /SMB15/DEF_PAR_PROC_O003_J06
    Not activated - error
    Success: 00000 Errors: 00001 eCATT Procedure Serial Number: 0000000136
    End of activation eCATT: /SMB15/DEF_PAR_PROC_O003_J06
    Thanks & Regards,
    Sushma

    Hi,
    You need to use the 'Solution Builder' to activate.
    T. code /n/smb/bbi -> Click on implementation.
    Then choose the building blocks (as per your scope) and activate. If there are some errors, you need to fix them manually: either do the configuration manually in the IMG, or change the relevant .txt files (responsible for the particular BC set/eCATT) and re-run the activation.

  • The application "Microsoft Word.app" can't be opened in Microsoft Office

    I am getting the error "The application “Microsoft Word.app” can’t be opened." I have Microsoft Office for Mac. Help!

    Your MS Office may have become corrupted. Reinstalling Office can address problems that you may be experiencing.
    I don't know which version of Office for the Mac you have, 2008 or 2011.
    The current version of Office 2011 is 14.4.9.
    First - find your Office 2011 install disc with the product key # - and only then remove Office according to MS instructions
    http://support.microsoft.com/kb/2398768
    Or go here to DianefromOregon's site for help removing Office 2011:
    http://www.officeformachelp.com/2012/12/office-for-mac-2011-remove-office/
    Then Reinstall from DVD
    Then enter your Product #
    After successfully reinstalling Office 2011, update your Office product using the software updater within Office, called Microsoft Update, or by going to the Help menu within Word, Excel or PowerPoint and selecting Update.
    It may take a few rounds before you get to the final upgrade to the current Office version.
    It should be 14.4.9 as of the current date.
    How to locate product keys
    http://support.microsoft.com/kb/2279109
    or here on locating product keys
    http://office.about.com/od/MicrosoftOfficeMac/a/Best-3-Ways-To-Find-Microsoft-Office-For-Mac-Key-Codes.htm
    Or
    http://try.officeformac.com/store?Action=ContentTheme&Locale=en_US&SiteID=msmacus&pbPage=CSTable&resid=VSGpKAoydBAAAINBCDQAAABU&rests=1428269373647
    Note is updated as of April 27, 2015

  • Office Web Apps - Best Practice for App Pool Security Account?

    Guys,
    I am finalising my testing of Office Web Apps, and am ready to move on to deploying it to my live farm.
    Generally speaking, I put service applications in their own application pool.
    Obviously doing so has an overhead on memory and processing; however, generally speaking, it is best practice from a security perspective to use separate accounts.
    I have to create 3 new service applications in order to deploy Office Web Apps; in my test environment these are using the default SharePoint app pool.
    Should I create one application pool for all my Office Web Apps with a fresh service account, or does it make no odds from a security perspective to run them in the default app pool?
    Cheers,
    Conrad
    Conrad Goodman MCITP SA / MCTS: WSS3.0 + MOSS2007

    I run my OWA under its own service account (spOWA) and use only one app pool. Just remember that if you go this route: "When you create a new application pool, you can specify a security account used by the application pool to be either a predefined Network Service account or a managed account. The account must have db_datareader, db_datawriter, and execute permissions for the content databases and the SharePoint configuration database, and be assigned to the db_owner role for the content databases." (http://technet.microsoft.com/en-us/library/ff431687.aspx)

  • Supported/Best Practice to restore Planning application between servers

    I have two servers with Hyperion Planning 9.3.1 (prod and dev) I want to copy the application called 'BFS' from Production to 'NewBFS' - Dev server.
    As per our consultants they indicated to do the following:
    1. Backup the repository database containing application BFS from production
    2. Do a restore of the .bak file to 'NewBFS' database on dev server
    3. Resync orphan logins (from sql server logins and database logins)
    4. Log into Planning via the default admin user ID
    5. Go to application settings and change the URL
    6. Register the Shared Services URL
    7. Manage Database
    8. Check all boxes and click refresh
    9. Go to Shared Services and resync native directory users
    However, when we try to log into Planning with anything other than 'admin', we receive an error that 'user xx is not provisioned ...'
    From my DB experience, the user tables are either still referencing production and/or have not resynced properly.
    So, long story short: can I restore one Planning app to another server, and if so, what is the supported/best practice?
    thanks
    JTS

    HemanthK,
    This is what we do to restore one Planning app from prod to test, and it now looks like it works. We are on MS SQL Server, so these instructions are based on a SQL Server DB:
    1. Backup the Planning Application SQL DB on Production
    2. Stop the Shared Services and Planning Services on Development
    3. Restore the Production Planning App SQL DB
    4. Reset orphan logins (SQL script is available at other sites, just google Reset orphan users SQL)
    5. Restart the Shared Service first then Planning Service on Development
    6. Log into Planning on Development as the default admin, go to the new app, and choose Manage Application Settings
    7. Run the script John had indicated: c:\Hyperion\Planning\bin>UpdateUsers.cmd servername adminname adminpassword applicationname (no dashes before each parameter)
    8. Go into Planning as admin again, choose a form, then choose Assign Access, Add Access, Migrate Identities. This is a good self-check that the listed users are only the users from the development box
    9. Finally, go to Manage Databases in Planning and choose to refresh the database and security
    10. Choose Manage Security Filters; it should now reflect only development users
    11. Log into Planning as a development user to verify correct security/user rights...
    You should now be able to access the application on the development box.
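Step 4 above (resetting orphaned logins after the restore) can be scripted. A minimal sketch that emits the remapping commands; the user names here are hypothetical, and in practice you would first list the orphans with `sp_change_users_login 'Report'`:

```python
# Sketch of step 4: after restoring the .bak on the dev server, remap each
# orphaned database user back to the matching SQL Server login.
def fix_orphan_statements(orphaned_users):
    return [
        f"EXEC sp_change_users_login 'Auto_Fix', '{user}';"
        for user in orphaned_users
    ]

# Hypothetical orphaned users found after the restore:
for stmt in fix_orphan_statements(["planning_app", "hyp_user"]):
    print(stmt)
```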

  • Mobile App Best Practice When Using SQLite Database

    Hello,
    I have a mobile app that has several views.
    Each view calls a different method of a Database custom class that basically returns the array from a synchronous execute call.
    So, each view has a creationComplete handler in which I have something like this:
    var db:Database=new Database();
    var connectResponse:Object=db.connect('path-to-database');
    if(connectResponse.allOK) //allOK is true if the connection was successful
       //Do stuff with data
    else
       //Present error notice
    However, this seems redundant. Is it OK to connect to the database just once, in the main Application file,
    and then do something like FlexGlobals.topLevelApplication.db?
    And, generally speaking, can constants and other things that I need throughout the app be placed in the main app? I'm asking as a matter of best practice, not feasibility, since technically it is possible.
    Thank you.

    No, I only connect once.
    I figured I wanted several views to use it, so I made it a static singleton, as I only have one database.
    I actually use synchronous calls, but there is a sync-with-remote-MySQL-database function, hence the EventDispatcher
    ... although I am thinking it might be better to use async calls, dispatch a custom event, and have the relevant views subscribe
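The static-singleton idea above translates to most platforms. A minimal sketch in Python with sqlite3 (the original is ActionScript; the path and usage are illustrative):

```python
import sqlite3

# Sketch of the "connect once, share everywhere" singleton pattern
# described above, using Python's sqlite3. The path is a placeholder;
# ":memory:" keeps the example self-contained.
class Database:
    _instance = None

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)

    @classmethod
    def instance(cls):
        # Create the single shared connection lazily on first use.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

# Every "view" asks for the same shared instance instead of reconnecting:
db1 = Database.instance()
db2 = Database.instance()
```

Each view then calls `Database.instance()` rather than constructing its own connection, which is what makes a single connection safe to share across views.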

  • Best practices on number of pipelines in a single project/app to do forging

    Hi experts,
    I need a couple of clarifications from you regarding Endeca guided search for an enterprise application.
    1) Say, for example, I have a web application iEndecaApp which is created by imitating the jsp reference application. All the necessary presentation APIs are present in the WEB-INF/lib folder.
    1.a) Do I need to configure anything else to run the application?
    1.b) I have created the web app in Eclipse. Will I be able to run it from any third-party Tomcat server? If not, where do I have to put the war file to run the application successfully?
    2) For the above web app "iEndecaApp" I have created an application named "MyEndecaApp" using the deployment template, so one generic pipeline is created. I need to integrate 5 different sources of data. To be precise:
    i) CAS data
    ii) Database table data
    iii) Txt file data
    iv) Excel file data
    v) XML data
    2.a) So what is the best practice to integrate all the data? Do I need to create 5 different pipelines (one for each source), or do I have to integrate all 5 sources in a single pipeline?
    2.b) If I create 5 different pipelines, should they all reside in the single application "MyEndecaApp", or do I need to create 5 different applications using the deployment template?
    Hope you guys will reply soon... waiting for your valuable response.
    Regards,
    Hoque

    Point number 1 is very much possible, i.e. running the jsp reference application from a server of your choice. I haven't tried that myself, but will shed some light on it once I do.
    Point number 2 - You must create 5 record adapters in the same pipeline diagram and then join them with the help of joiner components. The result must be fed to the property mapper.
    So 1 application, 1 pipeline, and all 5 data sources within one application is the ideal case.
    And logically, since they are all related data, they must have some joining conditions, and you can't ask 5 different MDEX engines to serve you a combined result.
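Conceptually, the joiner step merges records from each source adapter on a shared key before the property mapper sees them. A minimal sketch (illustrative only, not Endeca configuration; source names and the join key are made up):

```python
# Sketch of what joiner components conceptually do: records from several
# source adapters are merged on a shared record key into one combined
# record per key, which is then handed to the property mapper.
def join_sources(key, *sources):
    merged = {}
    for source in sources:
        for record in source:
            # Records sharing the same key value are combined into one.
            merged.setdefault(record[key], {}).update(record)
    return list(merged.values())

# Two hypothetical sources sharing the key "id":
cas_data = [{"id": 1, "title": "Widget"}]
db_data  = [{"id": 1, "price": 9.99}]
print(join_sources("id", cas_data, db_data))
```

Each output record carries the properties contributed by every source for that key, which is why one pipeline with joined adapters beats five separate pipelines for related data.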
    Hope this helps you.
    <PS: This is to the best of my knowledge>
    Thanks,
    Mohit Makhija
