Web console - delegating permissions correctly - Advice on best practice

Hi,
I'm in the process of rolling out the Orchestrator web console for wider use within our department. After reading some posts on console delegation I have been able to set up a group which, when added to the Orchestrator root and sub-folders, allows the folder views to be controlled, but only to a very basic degree.
What I mean - or what I'm finding - is that I have quite a 'deep' folder structure for my runbooks (currently about four levels), e.g. from the root folder: Runbooks -> ProductionRunbooks -> ServiceDeskRunbooks -> ExchangeRunbooks, the last of which contains two Exchange runbooks.
So, to delegate this structure to service desk staff, I have created a security group (service desk_console), given it the basic Read permission on the Runbooks and ProductionRunbooks folders, and then Full Control (including child objects) on the ServiceDeskRunbooks folder to allow execution of any runbooks below that level.
My query is: is this the way it should work? I initially thought I could set the Read permission at the top level and then just the Full Control permission on the specific lower-level folder, but this didn't work - I had to apply the Read permission at each of the folders between the root and the target folder.
So as the number of runbooks/folders grows, and with the possible mix of user groups who will require access to run a particular runbook, I can see the delegation of permissions becoming very messy using the method I currently have working - i.e. with potentially several user groups I will basically have to set permissions explicitly for each user group at all levels on all folders?
A possible solution I'm thinking of is to create a 'general console users' group and add the specific user groups to it (e.g. Service Desk, Exchange Team, VDI Team), set the Read permission for that group on the root Runbooks and ProductionRunbooks folders, and then set Full Control for each specific user group on the folders containing the runbooks pertaining to that group - any runbooks required by multiple groups could be placed in a 'general' folder with all groups having Full Control on it.
That's my thinking - it seems a bit messy to me, but I'm just interested to hear and confirm whether that is simply the limitation and the way console delegation is supposed to work, or whether there is a neater way - if there is, I'd like to know!
Cheers - PS I know this descended into a bit of a ramble/discussion in my own head, so apologies ;-)

Hi Stefan, thanks for your reply and suggestion. What I probably didn't explain, and what I was hoping to achieve in the delegation model, was to make only the relevant folders/runbooks visible to the relevant operators/user groups.
The issue probably stems from me having a pretty messy folder structure (generally) and wanting to hide that mess and confusion from operators who will be new to the console. Basically I have a high-level folder called Production, underneath which I create neat and tidy folders/runbooks following a good naming convention - only production-ready stuff goes in here, and this is the focus of what I want to make visible and control access to. However, I also have high-level folders for PreProduction and Testing, and within those are a very large number of folders/runbooks which don't follow good naming, so it is easy to lose track when multiple folders are expanded fully.
So my issue with setting the List permission and letting it be inherited down the tree is that, I assume, I would be giving the console user the full (list) view of that structure even if they can't execute any runbooks.
So is the only way to enforce views and run permissions to specify explicit permissions accordingly at each level in the tree, i.e. you can't skip setting folder permissions on some of the in-between 'organizational' folders? From my example above, the ProductionRunbooks -> ServiceDeskRunbooks folders exist just to logically organize the folders that actually contain runbooks, such as ExchangeRunbooks. Ideally I would like to set the permissions in such a way that allows the Service Desk group to view the runbooks at the ExchangeRunbooks subfolder level.
Hope that makes sense - I get the feeling the answer is no, and the only way to enforce it is to use multiple groups with explicit permissions at each level in the folder structure. Happy to be told otherwise!

Similar Messages

  • Seeking advice on Best Practices for XML Storage Options - XMLTYPE

    Sparc64
    11.2.0.2
    During OOW12 I tried to attend every xml session I could. There was one where a Mr. Drake was explaining something about not using clob
    as an attribute to storing the xml and that "it will break your application."
    We're moving forward with storing the industry-standard invoice in an XMLType column, but I'm now concerned that our table definition is not what was advised:
    --i've dummied this down to protect company assets
      CREATE TABLE "INVOICE_DOC"
       (     "INVOICE_ID" NUMBER NOT NULL ENABLE,
         "DOC" "SYS"."XMLTYPE"  NOT NULL ENABLE,
         "VERSION" VARCHAR2(256) NOT NULL ENABLE,
         "STATUS" VARCHAR2(256),
         "STATE" VARCHAR2(256),
         "USER_ID" VARCHAR2(256),
         "APP_ID" VARCHAR2(256),
         "INSERT_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
         "UPDATE_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
          CONSTRAINT "FK_####_DOC_INV_ID" FOREIGN KEY ("INVOICE_ID")
                 REFERENCES "INVOICE_LO" ("INVOICE_ID") ENABLE
       ) SEGMENT CREATION IMMEDIATE
    INITRANS 20  
    TABLESPACE "####_####_DATA"
       XMLTYPE COLUMN "DOC" STORE AS BASICFILE CLOB  (
      TABLESPACE "####_####_DATA" ENABLE STORAGE IN ROW CHUNK 16384 RETENTION
      NOCACHE LOGGING
      STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
    XMLSCHEMA "http://mycompanynamehere.com/xdb/Invoice###.xsd" ELEMENT "Invoice" ID #####
    What is a best practice for this type of table?  Yes, we intend on registering the schema against an xsd.
    Any help/advice would be appreciated.
    -abe

    Hi,
    I suggest you read this paper : Oracle XML DB : Choosing the Best XMLType Storage Option for Your Use Case
    It is available on the XML DB home page along with other documents you may be interested in.
    To sum up, the storage method you need depends on the requirement, i.e. how XML data is accessed.
    There was one where a Mr. Drake was explaining something about not using clob as an attribute to storing the xml and that "it will break your application."
    I think the message Mark Drake wanted to convey is that CLOB storage is now deprecated and shouldn't be used anymore (though still supported for backward compatibility).
    The default XMLType storage starting with version 11.2.0.2 is now Binary XML, a post-parse binary format that optimizes both storage size and data access (via XQuery), so you should at least use it instead of BASICFILE CLOB.
    Schema-based Binary XML is also available; it adds another layer of "awareness" for Oracle to manage instance documents.
    To use this feature, the XML schema must be registered with "options => dbms_xmlschema.REGISTER_BINARYXML".
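    As a rough illustration only - a sketch, where the directory object, file name and schema URL are placeholders rather than anything from your DDL - the registration and the column storage could look something like this:
      BEGIN
        DBMS_XMLSCHEMA.registerSchema(
          schemaurl => 'http://mycompanynamehere.com/xdb/Invoice.xsd',  -- placeholder URL
          schemadoc => XMLType(bfilename('XSD_DIR', 'Invoice.xsd'),     -- XSD_DIR is a hypothetical directory object
                               nls_charset_id('AL32UTF8')),
          gentypes  => FALSE,
          gentables => FALSE,
          options   => DBMS_XMLSCHEMA.REGISTER_BINARYXML);
      END;
      /
      CREATE TABLE invoice_doc (
        invoice_id NUMBER  NOT NULL,
        doc        XMLTYPE NOT NULL
        -- ... your other columns, constraints and storage clauses ...
      )
      XMLTYPE COLUMN doc STORE AS SECUREFILE BINARY XML
      XMLSCHEMA "http://mycompanynamehere.com/xdb/Invoice.xsd" ELEMENT "Invoice";
    The key differences from your current DDL are the REGISTER_BINARYXML option at registration time and STORE AS ... BINARY XML instead of BASICFILE CLOB.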
    The other common approach for schema-based XML is Object-Relational storage.
    BTW... you may want to post here next time, in the dedicated forum : {forum:id=34}
    Mark Drake is one of the regular users here, along with Marco Gralike, whom you've probably also seen at OOW.
    Edited by: odie_63 on 18 oct. 2012 21:55

  • Consuming web services in a jsr 168 portlet best practices.

    I am building portlets (JSR 168 API in WebSphere Portal 6.0, using the web service client of Rational). I need some suggestions on caching the web service data in the portlet. We have a number of portlets (somewhere around 4 or 5) on a portal page which basically rely on a single WSDL Lotus Domino web service.
    Is there a way I can cache the data returned by the web service so that I don't make repeated calls to it on every portlet request? Any best practices/ideas on how I could avoid multiple web service calls would be appreciated.

    Interestingly, as it often happens with Oracle portal, this has started working without me doing anything special.
    However, the session events my listener gets notified of are (logically, as this portlet works via WSRP) different from user sessions. The problem I'm trying to solve now is that logging off (in SSO) doesn't lead to those sessions being destroyed. They only get destroyed after timeout specified in my web.xml (<session-config><session-timeout>30</session-timeout></session-config>). On the other hand, when they do expire, the SSO session may still be active, in which case the user gets presented with the infamous "could not get markup" error message. The latter is unacceptable in our case, so we had to set session-timeout to a pretty high value.
    So the question is, how can we track when the user logs off. We have found the portal.wwctx_sso_session$ and portal.WWLOG_ACTIVITY_LOG1$ (and ...2$) tables, but no documentation for them. However, the real problem with using those tables is that there's no way we could think of to match the portlet sessions with SSO sessions/actions listed in the tables. (Consider situation when someone logs in from two PCs.)
    Any ideas?

  • Advice on Best practice for inter-countries Active Directory

    We want to merge three Active Directories, with one as parent in Dubai and children in Dubai, Bahrain and Kuwait. The time zones are different and the sites are connected using VPN/leased lines. From my studies I have explored two options. One way is to have the parent domain/forest in Dubai and a child domain in each respective country/office; the second way is to have the parent and all child domains in the Dubai data center, as it is bigger, while the respective countries have DCs connected to their respective child domains in Dubai. (Personally I find the second option safer.)
    Kindly advise which approach comes under best practice.
    Thanks in advance.

    Hi Richard Mueller,
    You perfectly got my point. We have three different forests/domains in three different countries. I asked this question because I am worried about replication problems.
    And yes, there are political reasons due to which we want to have multiple domains under one single forest. I have the following points:
    1. With multiple domains you introduce complications with trusts
    (Yes, we will face complications; that is why I will have a VM hosting the three child domains for the 3 countries in HQ, sitting right next to my main AD server which holds the forest/domain - which I hope will help in fixing replication problems)
    2. ... and accessing resources in remote domains.
    (To address this issue I will implement two additional DCs in the respective countries to make the resources available; these RODCs will be pointed towards their respective main domains in HQ)
    As an example:- 
    HQ data center=============
    Company.com (forest/domain)
    3 child domains under company.com
    example uae.company.com
    =======================
    UAE regional office=====================
    2 RODCs pointed towards uae.company.com in HQ
    ==================================
    Please tell me if I make sense here.

  • Advice re best practice for managing the scan listener logs and list logs

    Hi friends,
    I've just started a job as a RAC DBA for some big 24*7 systems; I've never worked with Clusterware and RAC before.
    Two space problems:
    1) Very large listener_scan2.log in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/trace folder
    2) Heaps of log_nnn.xml files in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/alert folder (4Gb used up)
    I'd welcome advice on the best way to manage these in the short term (i.e. delete manually) and on the recommended practice and safest way going forward (ADRCI maybe? not sure how it works with scan listeners).
    Welcome advice and commands that could be used to safely clean these up and put a robust mechanism in place for logfile management in RAC and CLusterware systems.
    Finally, should I be checking the log files in /u01/11.2.0/grid/log/diag/tnslsnr/<server name>/listener_scan2/alert regularly?
    My experience with listener logs is that they are only looked at when there are major connectivity issues and on the whole are ignored.
    Thanks for your help,
    Cheers, Rob

    Have you had any issues that require them for investigative purposes? If not, just remove them. Are the logs required for some sort of audit process? If yes, gzip them to a location where you can use your OS tape backup policies to retain them for n-days. Once you remove an active file, it should recreate the file and continue without interruption.

  • BFILE: need advice for best practice

    Hi,
    I'm planning to implement a document management system. These are my requirements:
    (0) Oracle 11gR2 on Windows 2008 server box
    (1) Document can be of type Word, Excel, PDF or plain text file
    (2) Document will get stored in DB as BFILE in a table
    (3) Documents will get stored in a directory structure: action/year/month, i.e. there will be many DB directory objects
    (4) User has read only access to files on DB server that result from BFILE
    (5) User must check out/check in document for updating content
    So my first problem is how to "upload" a user's file into the DB. My idea is:
    - there is a "transfer" directory where the user has read/write access
    - the client program copies the user's file into the transfer directory
    - the client program calls a PL/SQL-procedure to create a new entry in the BFILE table
    - this procedure will run with augmented rights
    - procedure may need to create a new DB directory (depending on action, year and/or month)
    - procedure must copy the file from transfer directory into correct directory (UTL_FILE?)
    - procedure must create new row in BFILE table
    Is this a practicable way? Is there anything that I could do better?
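    In code terms, this is roughly what I have in mind - just a sketch, with the directory, table and procedure names made up and error handling omitted:
      CREATE OR REPLACE PROCEDURE add_document (
        p_filename   IN VARCHAR2,
        p_target_dir IN VARCHAR2     -- e.g. 'DOCS_2012_05' for action/year/month
      ) AUTHID DEFINER               -- runs with the owner's rights ("augmented rights")
      AS
        l_src UTL_FILE.FILE_TYPE;
        l_dst UTL_FILE.FILE_TYPE;
        l_buf RAW(32767);
      BEGIN
        -- the target DB directory is assumed to exist already; creating it on demand
        -- (CREATE OR REPLACE DIRECTORY via dynamic SQL) would be a separate step
        -- binary-safe copy from the transfer area into the target directory
        l_src := UTL_FILE.FOPEN('TRANSFER_DIR', p_filename, 'rb', 32767);
        l_dst := UTL_FILE.FOPEN(p_target_dir,   p_filename, 'wb', 32767);
        LOOP
          BEGIN
            UTL_FILE.GET_RAW(l_src, l_buf, 32767);
            UTL_FILE.PUT_RAW(l_dst, l_buf, TRUE);
          EXCEPTION
            WHEN NO_DATA_FOUND THEN EXIT;   -- end of the source file
          END;
        END LOOP;
        UTL_FILE.FCLOSE(l_src);
        UTL_FILE.FCLOSE(l_dst);
        -- register the document as a new row in the BFILE table
        INSERT INTO document_bfiles (doc_name, doc_file)
        VALUES (p_filename, BFILENAME(p_target_dir, p_filename));
        -- clean up the transfer directory
        UTL_FILE.FREMOVE('TRANSFER_DIR', p_filename);
      END add_document;
      /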
    Thanks in advance for any hints,
    Stefan
    Edited by: Stefan Misch on 06.05.2012 18:42

    Stefan Misch wrote:
    yes, from a DBA point of view...
    Not really just from a DBA point of view. If you're a developer and you choose BFILE, and you don't have those BFILEs on the file system being backed up and they subsequently go "missing", I would say you (the developer) are at fault for not understanding the infrastructure you are working within.
    Stefan Misch wrote:
    But what about the possibility for the users to browse their files? This would mean I had to duplicate the files: one copy that goes into the DB, is stored as a BLOB and can be used to search; another copy would get stored on the file system just to enable the user to browse their files (i.e. what files were created for action "offers" in February 2012 - the filenames contain customer id and name as well as user id). In most cases there will be fewer than 100 files in any of those directories.
    This is why I thought a BFILE might be the best alternative, as I get both: fast index search and browsing capability for users who are used to using Windows Explorer...
    Sounds like it would be simple enough to add some metadata about the files in a table - a bunch of columns providing things like "action", "date", "customer id", etc. - along with the document stored in a BLOB column.
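    Purely as an illustration (the table and column names below are invented, not anything from your system), that could look like:
      CREATE TABLE documents (
        doc_id      NUMBER        PRIMARY KEY,
        action      VARCHAR2(30)  NOT NULL,
        doc_date    DATE          NOT NULL,
        customer_id NUMBER,
        customer    VARCHAR2(100),
        user_id     VARCHAR2(30),
        file_name   VARCHAR2(255) NOT NULL,
        content     BLOB
      )
      LOB (content) STORE AS SECUREFILE;
      -- "browsing" then becomes a query instead of Windows Explorer, e.g.
      -- all files created for action 'offers' in February 2012:
      SELECT file_name, doc_date, customer
      FROM   documents
      WHERE  action = 'offers'
      AND    doc_date >= DATE '2012-02-01'
      AND    doc_date <  DATE '2012-03-01';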
    As for the users browsing the files, you'd need to build an application to interface with the database ... but i don't see how you're going to get away from building an application to interface with the database for this in any event.
    I personally wouldn't be a fan of providing users any sort of access to a production servers file system, but that could just be me.

  • Advice on best practices on book proof stages

    I'm wondering how others go about book projects when you move from one proof to the next, in the context of using InDesign's book panel to manage your project.
    I create a first proof in InDesign, send PDFs to contributors, and they approve or give me changes.
    This brings me to the second proof. Those PDFs are sent to an external proofreader.
    When the proofreader's comments come back, I incorporate them into the final proof, which goes to indexers.
    Because of the way things can get mucked up, I need to keep the first, second and final proofs separate, so that I can go back to figure out where things went wrong, if they did go wrong. So there is an individual directory for each stage; furthermore, so that it is easy to tell what I'm looking at, the file name of each chapter reflects the stage "1_StupidBook_proof1.indd" and so on.
    So what do people usually do in InDesign, in terms of the .indb file? Do you just rename all the files to "xxx_proof2" and edit the .indb file, removing the old ones and adding the new? Seems cumbersome. Or rename the files and create a NEW .indb file?
    Just curious about how others do it. Appreciate any input.

    I have variables in the master page footers, one is the file name and one is the date (last changed, I think?). And one serious drawback with this method, obviously, is that I have to remove it from each chapter in the final proof.
    Ah -- that wasn't obvious to me. You can have any of 3 dates as a text variable: the creation date, the modification date, or the output date.
    One approach would be to use a custom text variable that you could synchronize across your book. That way you could specify "PROOF1" and then remove or change the variable across all chapters in a single operation.
    So your suggestion of the page information option under crop marks is a much better idea.
    I am not sure it is the way to go, though. If you print on oversize pages (e.g., if your book size is smaller than letter and you print on letter, or smaller than tabloid and you print on tabloid), it is easy enough to print at 100% and the Page Info will appear on the paper outside the area of your book.
    If you don't do that (most of us don't have that luxury), then you can consider moving the page marks inside the live region of the page. Doing this requires exploiting an undocumented feature of InDesign, custom crop marks, which requires hand-editing a custom file... See http://forums.adobe.com/message/3637984#3637984, etc. for directions. Some might consider this too much of a pain...

  • SCOM 2012 Sp1 Cu4 Web Console issue logging

    Hi, I am having an issue with the SCOM 2012 SP1 CU4 web console.
    When I log on to the web console it says Signing In,
    and then loops on Initializing....
    After a while I get this error:
    Please provide the following information to the support engineer if you have to contact Microsoft Help and Support :
    System.TimeoutException: [HttpRequestTimedOutWithoutDetail]
    Arguments:
    http://localhost/OperationsManager/Services/DataAccessService.svc
    Debugging resource strings are unavailable. Often the key and arguments provide sufficient information to diagnose the problem.
    See
    http://go.microsoft.com/fwlink/?linkid=106663&Version=5.1.20913.00&File=System.ServiceModel.dll&Key=HttpRequestTimedOutWithoutDetail ---> System.Net.WebException ---> System.Net.WebException
       at System.Net.Browser.BrowserHttpWebRequest.InternalEndGetResponse(IAsyncResult asyncResult)
       at System.Net.Browser.BrowserHttpWebRequest.<>c__DisplayClassa.<EndGetResponse>b__9(Object sendState)
       at System.Net.Browser.AsyncHelper.<>c__DisplayClass4.<BeginOnUI>b__0(Object sendState)
       --- End of inner exception stack trace ---
       at System.Net.Browser.AsyncHelper.BeginOnUI(SendOrPostCallback beginMethod, Object state)
       at System.Net.Browser.BrowserHttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
       at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelAsyncRequest.CompleteGetResponse(IAsyncResult result)
       --- End of inner exception stack trace ---
       at System.ServiceModel.AsyncResult.End[TAsyncResult](IAsyncResult result)
       at System.ServiceModel.Channels.ServiceChannel.EndCall(String action, Object[] outs, IAsyncResult result)
       at System.ServiceModel.ClientBase`1.ChannelBase`1.EndInvoke(String methodName, Object[] args, IAsyncResult result)
       at Microsoft.EnterpriseManagement.Presentation.DataAccess.Proxy.ServiceReference.DataAccessServiceClient.DataAccessServiceClientChannel.EndExecute(IAsyncResult result)
       at Microsoft.EnterpriseManagement.Presentation.DataAccess.Proxy.ServiceReference.DataAccessServiceClient.Microsoft.EnterpriseManagement.Presentation.DataAccess.Proxy.ServiceReference.IDataAccessService.EndExecute(IAsyncResult result)
       at Microsoft.EnterpriseManagement.Presentation.DataAccess.Proxy.ServiceReference.DataAccessServiceClient.OnEndExecute(IAsyncResult result)
       at System.ServiceModel.ClientBase`1.OnAsyncCallCompleted(IAsyncResult result)
    Previously I found that the only way to fix it was to completely remove IIS from the server and reinstall everything connected to the web console. So far that is not the best way to go, as I keep getting this error again after 2 days.
    Can anyone tell me how I would even go about diagnosing this?
    Kind Regards

    Got the same issue again with Update Rollup 2 for R2,
    then found this blog; it did not fix it for me, but it pointed me in a direction:
    http://thoughtsonopsmgr.blogspot.com/2013/12/quick-fix-register-aspnet-40-with-iis.html
    What helped me was to re-add the svc-Integrated-4.0 handler by copying and pasting its info back in, and the same with ScriptModule-4.0.
    I don't know why that worked for me - this is a funny issue - and I also don't recommend doing this, as it only worked for me.

  • Exchange Best Practices Analyzer and Event 10009 - DCOM

    We have two Exchange 2010 SP3 RU7 servers on Windows 2008 R2
    In general, they seem to function correctly.
    ExBPA (Best Practices Analyzer) results are fine. Just some entries about drivers being more than two years old (vendor has not supplied newer drivers so we use what we have). Anything else has been verified to be something that can "safely be ignored".
    Test-ServiceHealth, Test-ReplicationHealth and other tests indicate no problems.
    However, when I run the ExBPA, it seems like the server on which I run ExBPA attempts to contact the other using DCOM and this fails.
    Some notes:
    1. Windows Firewall is disabled on both.
    2. Pings in both directions are successful.
    3. DTCPing would not even run so I was not able to test with this.
    4. Connectivity works perfectly otherwise. I can see/manage either server from the other using the EMC or EMS. DAG works fine as far as I can see.
    What's the error message?
    Event 10009, DistributedCOM:
    "DCOM was unable to communicate with the computer --- opposite Exchange server of the pair of Exchange servers --- using any of the configured protocols."
    This is in the System Log.
    This happens on both servers and only when I run the ExBPA.
    I understand that ExBPA uses DCOM but cannot see what would be blocking communications.
    I can access the opposite server in MS Management Consoles (MMC).
    Note: the error is NOT in the ExBPA results - but rather in the Event Viewer System Log.
    Yes, it is consistent. Have noticed it for some time now.
    Does anyone have any idea what could be causing this? Since normal Exchange operations are not affected, I'm tempted to ignore it, but I have to do my "due diligence" and inquire. 
    Please mark as helpful if you find my contribution useful or as an answer if it does answer your question. That will encourage me - and others - to take time out to help you.

    Hi David,
    I recommend you refer the following article to troubleshoot this event:
    How to troubleshoot DCOM 10009 error logged in system event
    Why this happens:
    Generally speaking, the reason DCOM 10009 is logged is that the local RPCSS service can't reach the remote RPCSS service on the remote target server. There are many possibilities which can cause this issue.
    Scenario 1:
     The remote target server happens to be offline for a short time, for example, just for maintenance.
    Scenario 2:
    Both servers are online. However, an RPC communication issue exists between the two servers, for example: server name resolution failure, exhaustion of port resources for RPC communication, or firewall configuration.
    Scenario 3:
    Even though the TCP connection to the remote server has no problem, if RPC authentication runs into a problem - for example an error status code like 0x80070721, which means "A security package specific error occurred", during RPC authentication - DCOM 10009 will also be logged on the client side.
    Scenario 4:
    The target DCOM/COM+ service failed to be activated due to a permission issue. In this kind of situation, DCOM 10027 will be logged on the server side at the same time.
    Event ID 10009 — COM Remote Service Availability
    Resolve
    Ensure that the remote computer is available
    There is a problem accessing the COM Service on a remote computer. To resolve this problem:
    Ensure that the remote computer is online.
    This problem may be the result of a firewall blocking the connection. For security, COM+ network access is not enabled by default. Check the system to determine whether the firewall is blocking the remote connection.
    Other reasons for the problem might be found in the Extended Remote Procedure Call (RPC) Error information that is available in Event Viewer.
    To perform these procedures, you must have membership in Administrators, or you must have been delegated the appropriate authority.
    Ensure that the remote computer is online
    To verify that the remote computer is online and the computers are communicating over the network:
    Open an elevated Command Prompt window: click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    At the command prompt, type ping, followed by a space and the remote computer name, and then press ENTER. For example, to check that your server can communicate over the network with a computer named ContosoWS2008, type ping ContosoWS2008, and then press ENTER.
    A successful connection results in a set of replies from the other computer and a set of ping statistics.
    Check the firewall settings and enable the firewall exception rule
    To check the firewall settings and enable the firewall exception rule:
    Click Start, and then click Run.
    Type wf.msc, and then click OK. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    In the console tree, click Inbound Rules.
    In the list of firewall exception rules, look for COM+ Network Access (DCOM In).
    If the firewall exception rule is not enabled, in the details pane click Enable Rule, and then scroll horizontally to confirm that the protocol is TCP and the LocalPort is 135. Close Windows Firewall with Advanced Security.
    Review available Extended RPC Error information for this event in Event Viewer
    To review available Extended RPC Error information for this event in Event Viewer:
    Click Start, and then click Run.
    Type comexp.msc, and then click OK. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.
    Under Console Root, expand Event Viewer (Local).
    In the details pane, look for your event in the Summary of Administrative Events, and then double-click the event to open it.
    The Extended RPC Error information that is available for this event is located on the Details tab. Expand the available items on the Details tab to review all available information.
    For more information about Extended RPC Error information and how to interpret it, see Obtaining Extended RPC Error Information (http://go.microsoft.com/fwlink/?LinkId=105593).
    Best regards,
    Niko Cheng
    TechNet Community Support

  • Best Practice for CTS_Project use in a Non-ChARM ECC6.0 System

    We are on ECC6.0 and do not leverage Solution Manager to any extent.  Over the years we have performed multiple technical upgrades but in many ways we are running our ECC6.0 solution using the same tools and approaches as we did back in R/3 3.1. 
    The future vision for us is to utilize CHARM to manage our ITIL-centric change process but we have to walk before we can run and are not yet ready to make that leap.  Currently we are just beginning to leverage CTS_Projects in ECC as a grouping tool for transports but are still heavily tied to Excel-based "implementation plans".  We would appreciate references or advice on best practices to follow with respect to the creation and use of the CTS_Projects in ECC.
    Some specific questions: 
    #1 Is there merit in creating new CTS Projects for support activities each year?  For example, we classify our support system changes as "Normal", "Emergency", and "Standard".  These correspond to changes deployed on a periodic schedule, priority one changes deployed as soon as they are ready, and changes that are deemed to be "pre-approved" as they are low risk. Is there a benefit to create a new CTS_Project each year e.g. "2012 Emergencies", "2013 Emergencies" etc. or should we just create a CTS_Project "Emergencies" which stays open forever and then use the export time stamp as a selection criteria when we want to see what was moved in which year?
    #2 We experienced significant system performance issues on export when we left the project intersections check on. There are many OSS notes about the performance of this tool, but in the end we opted to turn off this check. Does anyone use this functionality? Any recommendations?
    Any other advice would be greatly appreciated.

    Hi,
    I created a project (JDeveloper) with local xsd-files and tried to delete and recreate them in the structure pane with references to a version on the application server. After reopening the project I deployed it successfully to the bpel server. The process is working fine, but in the structure pane there is no information about any of the xsds anymore and the payload in the variables there is an exception (problem building schema).
    How does bpel know where to look for the xsd-files and how does the mapping still work?
    This cannot be the way to do it correctly. Do I have a chance to rework an existing project or do I have to rebuild it from scratch in order to have all the references right?
    Thanks for any clue.
    Bette

  • Search for ABAP Webdynpro Best practice or/and Evaluation grid

    Hi Gurus,
    Managers and team leaders are faced with the development of SAP applications on the web, and functional people propose web applications to business people. I'm searching for best practices for Web Dynpro ABAP development. We use SAP NetWeaver 7.0 and SAP ECC 6.0 SP4.
    We are facing claims about Web Dynpro response time: the business wants a 3-second response time and we have 20 or 25 seconds.
    I want to communicate to functional people a kind of recommendation document explaining that in certain cases the use of Web Dynpro will not be a benefit for the business.
    I know that the volume of data transferred, the complexity of the screen and also the hardware are some of the key factors, but I expect some advice from the SDN community.
    Thanks for your answers.
    Rgds,
    Christophe

    Hi,
    25s is a lot. I wouldn't like to use an application with a response time that big. Anyway, Thomas Jung has just recently published a series of video blogs about WDA performance tools. It may help you analyze why your Web Dynpro application is so slow. Here is the link to the first part: http://enterprisegeeks.com/blog/2010/03/03/abap-freakshow-u2013-march-3-2010-wda-performance-tools-part-1/. There is also a dedicated Web Dynpro ABAP forum here on SDN; I would search there for some tips and tricks.
    Cheers

  • What are Resource Bundle Best Practices techniques for Enterprise App?

    Regarding JDeveloper: 11.1.1.6.0, Studio Edition
    I was wondering if someone could provide advice on Best Practices for managing Resource Bundles for an international Enterprise Application.
    I have been reading textbooks and throughout the web, and I can find different options available. And I can find cautionary tales to get it right at the beginning of Development, but I cannot find Best Practices suggestions.
    For instance:
    - Should I use XLIFF Resource Bundle, Properties Bundle, or List Resource Bundle?
    - What are the benefits and disadvantages of storing the Key/Value pairs in the database?
    - It seems that storing in the db would make maintenance easier, because applications do not need to be redeployed, but would they be slower?
    - One textbook indicates that "One Bundle per Project" is preferred for ViewController Project, and "One Bundle Per File" is preferred for Model Project. However, I cannot help but think if the whole Enterprise used just one Resource Bundle, it would save typing cust_id/Customer Number in 10 different Bundles.
    - One text indicates how to maintain translated versions of Access Keys, if the Bundle is a Properties Bundle, but provides no assistance for other Resource Bundles.
    Advice regarding Best Practices would be quite helpful.
    Sincerely,
    Arie

    Anyone?

  • Best Practices - reports

    I am looking for advice on best practices for reporting. Test runs generate a lot of data and I am not always sure what the best things to report on are.
    In particular I am comparing a number of different code releases over the last few weeks, is page response time the only thing, or most important thing to report on?

    Sorry for the not very detailed reply. You were on the right track though.
    The page response times are an important metric as they, along with error rates, indicate the end user experience. Therefore, I would recommend that any comparison of test runs begin with an analysis of the average response times under load and the transaction failure rate under load. Using e-Load, you could build these graphs as response times/transactions failed and virtual users on a time axis, or use Users as the scale.
    But, in my experience, the end user experience is only the beginning of the story, with the impact on the underlying infrastructure telling the other side. As you have already worked out, if the response times at a given level of users are the same, the first instinct is to say the new code is just as good. But the CPUs of the application servers might now be at 80% utilization where before they were 50%. So while the end user would see no difference, your overall capacity will have dropped. The same goes for other metrics like bandwidth if more or larger images were added to the application, etc.
    System level metrics then like KB/sec, hits per second and pages per second should also be included in the comparison. And these are already available in e-Load. Serverstats can collect the rest of the data for you in most cases.
    At minimum, I would look at CPU on the Web Servers. CPU, memory, requests per second and requests queued on the application servers. And CPU, connections and disk usage on databases. This will be a good start .
    CMason
    Senior Consultant - eLoadExpert
    Empirix

  • Best practice for GSS design

    Please advise on what records need to go in the public DNS server in a scenario where I have a URL, say x.y.com, which is listed in the domain list of the GSS-P, so that the GSS-P or GSS-S can hand out the respective external VIP to clients requesting the URL in case one of the GSSes/sites (GSS-P and GSS-S) becomes unavailable.
    Please also specify the communication path of a client accessing x.y.com.
    Please advise on the best practice.
    Thanks in advance
    ~EM

    Hi,
    I am new to GSS. I would appreciate it if someone could help me with the design. I want to know if I need to put the GSS inline after the internet-facing firewall and before the ACE module, or use it in one-arm mode. Trying to figure out the best fit in the design.
    FWSM1 >>> GSS >>> ACE
    or
    just put the GSS in one-arm mode hanging off the path:
    FWSM1 >>> ACE
                |
               GSS
    Thanks in advance,
    Nav

  • Best practices for adding CLICK listeners to complicated menus?

    OK, I’m gonna wear out my welcome but here’s my last question of the day:
    I’ve got a project that is essentially a large collection of menus, some buttons common across multiple screens, others unique. The following link is the work in progress, most of the complexity is in the “Star Action Items and Forms” area (btw: the audio in the launch presentation is just a placeholder track, I know we can't use it):
    http://www.appliedcd.com/Be-A-star/Be-A-star.html
    To deal with the large number of buttons, my timeline simply has the following on every menu frame:
    stop();
    initFrame();
    The initFrame() function then has a list of frames and activates the buttons appearing on each screen; a very simplified example follows. In this example commonButtons span all 3 menus, semiCommonButtons span menus 2 and 3, and button1A, button2A, etc. are unique per menu:
    function initFrame():void {
         var myFrame:String = this.currentLabel;
         commonButton1.addEventListener(MouseEvent.CLICK,onInternalLink);
         commonButton2.addEventListener(MouseEvent.CLICK,onInternalLink);
         commonButton3.addEventListener(MouseEvent.CLICK,onInternalLink);
         switch(myFrame) {
              case "menu1":
                   button1A.addEventListener(MouseEvent.CLICK,onInternalLink);
                   button1B.addEventListener(MouseEvent.CLICK,onInternalLink);
                   button1C.addEventListener(MouseEvent.CLICK,onInternalLink);
              break;
              case "menu2":
                   semiCommonButton1.addEventListener(MouseEvent.CLICK,onInternalLink);
                   semiCommonButton2.addEventListener(MouseEvent.CLICK,onInternalLink);
                   semiCommonButton3.addEventListener(MouseEvent.CLICK,onInternalLink);
                   button2A.addEventListener(MouseEvent.CLICK,onInternalLink);
                   button2B.addEventListener(MouseEvent.CLICK,onInternalLink);
                   button2C.addEventListener(MouseEvent.CLICK,onInternalLink);
              break;
              case "menu3":
                   button3A.addEventListener(MouseEvent.CLICK,onInternalLink);
                   button3B.addEventListener(MouseEvent.CLICK,onInternalLink);
                   button3C.addEventListener(MouseEvent.CLICK,onInternalLink);
               break;
          }
     }
    The way the project was designed, I "thought" menu3 would only be accessible through menu2, thus guaranteeing that the semiCommonButtons would get initialized, but I forgot that the functionality of my back button could jump the user directly from menu1 to menu3. The simple solution is to initialize every button on every navigation target; however, is this really the best way to initialize a bunch of buttons? Another possible approach would be to have an array of button instance names and a function that says: if instance XYZ exists, add a listener - then simply loop through the array on every nav target. Does anyone with more experience have advice on best practices in this situation?

    Hmmmm, I just ran a test on this whereby I added the above snippet to my master page. I then published a major version. I can see that every (Welcome) custom page layout has this data widget working, providing I add the div to the page.
    I wonder whether the reason I can't add the snippet directly to an individual custom layout page is a bug, or whether I am doing something incorrectly?
    Daniel
