Architecture/Design Question with best practices?

Should I have a separate web server and WebLogic instance for the application and for IAM?
If yes, how will the two communicate? For example, should I have a WebGate on each server so that they can talk to each other?
Is there any reference that helps in deciding how to design this? And if I have separate WebLogic instances, one for the application and one for IAM, how will session management work?
How is the design generally done in an IAM project?
Help appreciated.

The standard answer: it depends!
From a technical point of view, it sounds better to use the same "middleware infrastructure", BUT then the challenge is to find the latest WebLogic version that is certified by both the IAM applications and the enterprise applications. This will pull down the WebLogic version, since the IAM application stack is certified against older versions of WebLogic.
From a security point of view (access, availability): do you have the same security policy for the enterprise applications and the IAM applications (a component of your security architecture)?
From an organisational point of view: who owns WebLogic, the enterprise applications, and the IAM applications? At one of my customers, applications and infrastructure/security were in two different departments. Having a common WebLogic domain didn't fit the organization.
My short answer would be: keep them separate; this will save you a lot of technical and political challenges.
Didier.

Similar Messages

  • Quick question regarding best practice and dedicating NICs for traffic separation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic, etc.  I get that it's best practice to try to separate traffic where you can, especially for things like FT; however, I just wondered if there is a preferred method for achieving this. What I mean is:
    -     Is it OK to have everything on one switch but set each respective port group to have a primary and a failover NIC, i.e., FT, iSCSI, and all the others fail over? (This would sort of give you a backup in situations where you have limited physical NICs.)
    -    Or should I always aim to separate things entirely, with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which traffic I should segregate on its own separate switch?  Is there some sort of ranking order of priority/importance?  FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since that to me seems like a failover risk.

    I know the answer to this probably depends on how many physical NICs you have at your disposal; however, I wondered if there are any golden 100% rules, for example that FT must absolutely be on its own switch with its own NICs, even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch, etc.
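
    For what it's worth, the per-port-group active/standby override from the first option can be scripted. Below is a minimal pyVmomi sketch, assuming a port group named "FT" on vSwitch0 with vmnic2 active and vmnic3 standby; the host name, credentials, and NIC names are all hypothetical:

    # Minimal sketch: override NIC teaming for one port group ("FT") with an
    # explicit active/standby uplink order. All names below are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
    si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret", sslContext=ctx)
    host = si.content.searchIndex.FindByDnsName(dnsName="esxi01.example.com", vmSearch=False)
    netsys = host.configManager.networkSystem

    # Active/standby uplink order applied to this port group only
    order = vim.host.NetworkPolicy.NicOrderPolicy(activeNic=["vmnic2"], standbyNic=["vmnic3"])
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(nicOrder=order)
    spec = vim.host.PortGroup.Specification(
        name="FT", vlanId=0, vswitchName="vSwitch0",
        policy=vim.host.NetworkPolicy(nicTeaming=teaming))
    netsys.UpdatePortGroup(pgName="FT", portgrp=spec)
    Disconnect(si)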

  • List of activities for setting up ERP 6.0 with Best Practices

    Based on my understanding, if I were to plan the setup of an ERP 6.0 landscape with Best Practices (full scope), I would consider the execution of the following activities:
    Prepare EHP4 Landscape
    Install ERP 6.0 on DEV
    Upgrade DEV to EHP4 SP06
    Install SAP Best Practices v1.604
    Activate Full Scope of Best Practices on DEV
    Prepare QA (System Copy - DEV with BP Activated)
    Prepare PRD (System Copy - DEV with BP Activated)
    Register Landscape in Solution Manager
    Customization on EHP4
    Customize Best Practices Scenarios on DEV
    Transport Changes to QA
    Test Changes on QA
    Transport Changes to PRD
    Upgrade to EHP5
    Upgrade DEV, QA and PRD to EHP5
    Install HCM Localization on DEV, QA and PRD
    Customization on EHP5
    Customize HCM Best Practices Scenarios on DEV
    Transport Changes to QA
    Test Changes on QA
    Transport Changes to PRD
    Please advise if there is anything missing or incorrect.
    Thanks.

    Hi,
            I'm on a project with similar requirements. I would follow this order for the steps you describe:
    Install ERP 6.0 on DEV
    Upgrade DEV to EHP4 SP06
    Install SAP Best Practices v1.604 on DEV
    Install QA
    Install SAP Best Practices v1.604 on QA
    Install PRD
    Install SAP Best Practices v1.604 on PRD
    Activate Full Scope of Best Practices on DEV
    Register Landscape in Solution Manager
    Upgrade DEV, QA and PRD to EHP5
    Install HCM Localization on DEV, QA and PRD
    Customize Best Practices Scenarios on DEV
    Transport Changes to QA
    Test Changes on QA
    Transport Changes to PRD
    I hope this is useful for you.
    Best regards.
    Alejandro Cepeda.

  • CRM  - how to work with Best Practices

    Hi All,
    we will start the implementation of mySAP CRM in the next few weeks.
    I'm a bit confused: how should I work with Best Practices, and what are the differences between the 3 ways of using them?
    1) We have a Solution Manager system where I can use Best Practices for CRM
    2) Best Practices on help.sap.com: http://help.sap.com/bp_crmv250/CRM_DE/html/index_DE.htm (Building Blocks!)
    3) Best Practices DVD to install on the CRM system
    Are the 3 ways interchangeable? Is there some information provided by SAP?
    We have already installed the Best Practices DVD, but now I don't know how to use this add-on: is there a special transaction code to use it, or an extension to the IMG?
    regards
    Stefan

    Hi Stefan Kübler,
    If Solution Manager is in place, then the suggested (and best) method is to use the Best Practices there.
    If you want to install and use the Best Practices with the CRM system, the procedure is given on the Best Practices CD/DVD. You can also download the installation procedure from the following link: http://help.sap.com/bp_crmv340/CRM_DE/index.htm. Click on 'Installation' on the left and then 'Quick Guide' on the right, and download the document.
    Though the Best Practices give you a way to start, they can't replace your requirements. You have to configure the system as per your exact business requirements.
    I have never installed the Best Practices myself, but I have used them extensively as a reference in all my projects.
    Regards,
    Paul Kondaveeti

  • New to ColdFusion - Question regarding best practice

    Hello there.
    I have been programming in Java/C#/PHP for the past two years or so, and as of late have really taken a liking to ColdFusion.
    The question that I have is about the actual separation of code, and whether there are any best practices that are preached for this language. While I was learning Java, I was taught that it's best to have several layers in your code, for example: Front end (JSPs or ASP) -> Business Objects -> DAOs -> Database. All of the code that I have written in these three languages has followed this simple structure, for the most part.
    As I dive deeper into ColdFusion, most of the examples that I have seen from veterans of this language don't really incorporate much separation. And I'm not referring to the simple "here's what this function does" type of examples online where most of the code is written in one file; I've been able to see real projects that were created with this language.
    I work with a couple of developers who have been writing ColdFusion for a few years and posed this question to them as well. Their response was something to the effect of, "I'm not sure if there are any best practices for this, but it doesn't really seem like there's much of an issue making calls like this".
    I have searched online for any type of best practices or discussions around this and haven't seen much of anything.
    I do still consider myself somewhat of a noobling when it comes to programming, but matters of best practice are important to me for any language that I learn more about.
    Thanks for the help.

    Frameworks for web applications can require a lot of overhead, more than you might normally need when programming ColdFusion. I have worked with frameworks, including Fusebox. What I discovered is that when handing a project over to a different developer, it took them over a month before they were able to fully understand the Fusebox framework and then program in it comfortably. I decided not to use Fusebox on other projects for this reason.
    For maintainability, sometimes it's better not to use a framework; while there are a number of ColdFusion developers, those that know the Fusebox framework are in the minority. When using a framework, you always have to consider the amount of time needed to learn it and successfully implement it. A lot of it depends on how much of your code you want to reuse. One thing you have to consider is: if you need to make a change to the web application, how many files will you have to modify? Sometimes it's more files with a framework than if you just write code without one.
    While working on a website for Electronic Component sourcing, I encountered this dynamic several times.
    Michael G. Workman
    [email protected]
    http://www.usbid.com
    http://ic.locate-ic.com
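
    For illustration, the front end -> business objects -> DAO -> database layering described in the question is language-agnostic. Here is a minimal sketch of the same separation (in Python for brevity; all names are hypothetical):

    # Minimal sketch of the layering described above; names are hypothetical.
    import sqlite3

    class UserDAO:
        """Data access layer: the only place that touches SQL."""
        def __init__(self, conn):
            self.conn = conn
        def find_by_id(self, user_id):
            row = self.conn.execute(
                "SELECT id, name FROM users WHERE id = ?", (user_id,)).fetchone()
            return {"id": row[0], "name": row[1]} if row else None

    class UserService:
        """Business layer: rules and validation; no SQL, no markup."""
        def __init__(self, dao):
            self.dao = dao
        def display_name(self, user_id):
            user = self.dao.find_by_id(user_id)
            return user["name"].title() if user else "Unknown"

    # "Front end": a page template or controller would live at this layer.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'stefan')")
    service = UserService(UserDAO(conn))
    print(service.display_name(1))  # Stefan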

  • BI Technical Design Review Criteria/Best Practice Assessments

    Dear Experts,
    I am currently involved in conducting a pre-build BI technical design review, i.e., of the data model structure, extractors, transformation logic, and data flow diagrams.
    Are there any tangible criteria, review templates, or methods out there to ensure all components are included in a BI design and that they conform to the SAP Best Practices?
    Thanks,
    Jony

    Hi Jonathan,
    The BW project guidelines can be as follows.
    Stages in a BW project:
    1 Project Preparation / Requirement Gathering
    2 Business Blueprint
    3 Realization
    4 Final Preparation
    5 Go-Live & Support
    1 Project Preparation / Requirement Gathering
    Collect requirements through interviews with business teams / core users / information leaders.
    Study & analyze the KPIs (key figures) of the business processes.
    Identify the measurement criteria (characteristics).
    Understand the drill-down requirements, if any.
    Understand the business process data flow, if any.
    Identify the need for data staging layers in BW (i.e., the need for an ODS, if any).
    Understand the system landscape.
    Prepare the final requirements documents in the form of functional specifications containing:
    report owners,
    data flow,
    KPIs,
    measurement criteria,
    report format along with drill-down requirements.
    2 Business Blueprint
    Check Business Content against the requirements.
    Check for appropriate InfoObjects (key figures & characteristics).
    Check for InfoCubes / ODS objects.
    Check for DataSources & identify fields in the source system.
    Identify master data.
    Document all the information in a file (follow standard templates).
    Prepare the final solution.
    Identify differences (gaps) between Business Content & the functional specification. Propose new solutions/developments & changes, if required, at different levels such as InfoObjects, InfoCubes, DataSources, etc. Document the gaps & the respective solutions proposed (follow standard templates).
    Design & Documentation
    Design the ERD & MDM diagrams for each cube & related objects.
    Design the primary keys/data fields for intermediate storage in the ODS.
    Design the data flow charts right from the DataSource up to the cube.
    Consider performance parameters while designing the data models.
    Prepare high-level / low-level design documents for each data model (follow standard templates).
    Identify the roles & authorizations required and document them (follow standard templates).
    Final review of the design with core BW users.
    Sign off the Business Blueprint documents.
    3 Realization
    Check & apply the latest patches/packages in the BW & R/3 systems.
    Activate/build & enhance the cubes/ODS as per the data model designs; maintain the version documents.
    Identify & activate InfoObjects / master data InfoSources / attributes; prepare update rules.
    Assign DataSources; prepare transfer rules, MultiProviders, and InfoPackages.
    Perform unit testing of the data loads, for both master data & transaction data.
    Develop & test the end-user queries.
    Design the process chains; schedule & test them.
    Create authorizations/roles, assign them to users, and test.
    Apply necessary patches & Notes, if any.
    Freeze & release the final objects to the quality system.
    Perform quality tests.
    Redesign if required (document changes, maintain versions).
    4 Final Preparation
    Prepare the final checklist of objects to be released; identify the dependencies & the release sequence.
    Perform Go-Live checks in the production system, as recommended by SAP.
    Keep patch levels up to date in the production system.
    Test production scenarios in a pre-production system that is a replica of the production system.
    Do not encourage changes at this stage.
    Freeze the objects.
    5 Go-Live & Support
    Keep patch levels up to date.
    Release the objects to the production system.
    Run the setups in the R/3 source system & initialize the loads in BW.
    Schedule the batch jobs in the R/3 system (delta loads).
    Schedule the process chains in BW.
    Performance tuning: an ongoing activity.
    Enhancements, if any.
    You can get some detailed information in the following link.
    http://sap.ittoolbox.com/documents/document.asp?i=3581
    Try the ASAP implementation roadmap:
    https://websmp103.sap-ag.de/~form/sapnet?_SHORTKEY=01100035870000420636&_SCENARIO=01100035870000000202
    Check the links below, which give a brief overview of the above steps:
    https://websmp201.sap-ag.de/asap
    http://www.geocities.com/santosh_karkhanis/ASAP/
    https://service.sap.com/roadmaps
    https://websmp104.sap-ag.de/bi
    Blueprint:
    http://www.sap.com/services/servsuptech/bestpractices/index.epx (look for "blueprint")
    http://iris.tennessee.edu/Blueprint/BW/BW-Blue%20Print-Final.doc
    http://help.sap.com/bp_biv335/BI_EN/html/Business_Blueprint.htm
    Also check out
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/2e8e5288-0b01-0010-2ea8-bcd4df5084a7
    which is a how-to on the BI 7.0 upgrade. As suggested, also check out the BW upgrade roadmap on the support portal.
    Hope it helps.
    CSM Reddy

  • Question about Best Practices - Redwood Landscape/Object Naming Conventions

    Having reviewed the documentation and posts, I find that there is not much information available regarding best practices for the Redwood Scheduler in an SAP environment. We are running the free version.
    1) The job scheduling for SAP reference book (SAP Press) recommends multiple Redwood installations and using export/import to move jobs and other Redwood objects from, say, DEV -> QAS -> PROD. Presentations from the help.sap.com web site show the Redwood Scheduler linked to Solution Manager and handling job submissions for DEV-QAS-PROD. Point and Shoot (just be careful where you aim!) functionality is described as an advantage of the product. There is an SAP note (#895253) on making Redwood highly available. I am open to comments, inputs, and suggestions on this issue based on SAP client experiences.
    2) Related to 1), I have not seen much documentation on Redwood object naming conventions. I am interested in hearing how SAP clients have dealt with Redwood object naming (i.e., applications, job streams, scripts, events, locks). To date, I have seen a presentation in which customer objects are named starting with Z_. I like to include the object type in the name (e.g., EVT - Event, CHN - Job Chain, SCR - Script, LCK - Lock), keeping in mind the character length limitation of 30 characters. I also have an associated issue with event naming, given that we have 4 environments (DEV, QA, Staging, PROD). Assuming that we are not about to have one installation per environment, we need to include the environment in the event name. The downside here is that we lose transportability for the job stream: we need to modify the job chain to wait for a different event name when running in a different environment. Comments?

    Hi Paul,
    As suggested in the book 'Job Scheduling for SAP' from SAP Press, it is better to have multiple instances of Cronacle (at least 2: one for development & quality, and a separate one for production; this avoids confusion).
    Regarding transporting / replicating the object definitions: it is really easy to import and export objects like events, job chains, scripts, locks, etc. It is also easy and not very time-consuming to create them afresh in each system; only complicated job chains can be time-consuming to create.
    In normal cases, the testing of background jobs mostly happens only in the SAP quality instance, followed by the final scheduling in production. So it is very much possible to just export the verified script / job chain from the Cronacle quality instance and import the same into the Cronacle production instance (use of the Cronacle shell is really recommended for fast processing).
    Regarding OSS note 895253: yes, it is highly recommended to keep your central repository, processing server, and licensing information on a highly available clustered environment. This is very much required, as Redwood Cronacle acts as the central job scheduler in your SAP landscape (with the OEM version).
    As you have confirmed, you are using OEM, and hence you have only one process server.
    Regarding the conventions for names, it is recommended to create a centrally accessible naming convention document and then follow it. For example, in my company we are using a naming convention for jobs such as Z_AAU_MM_ZCHGSTA2_AU01_LSV, where A is for the APAC region, AU is for Australia (country), MM is for Materials Management, and ZCHGSTA2_AU01_LSV is the free text as provided by the batch job requester.
    For other Redwood Cronacle specific objects you can also derive naming conventions based on SAP instances; for example, if you want all the related scripts / job chains to be stored in one application, its name can be APPL_<logical name of the instance>.
    So, in a nutshell, all of the above is highly recommended.
    Also, the integration of SAP Solution Manager with Redwood exists to receive monitoring and alerting data and to pass Redwood Cronacle information to SAP Solution Manager, creating a single point of control. You can find information on the purpose of the XAL and XMW interfaces in the Cronacle help (F1).
    Hope this answers your queries. Please write if you need some more information / help in this regard.
    Best regards,
    Vithal
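
    A convention like the ones discussed above is easy to enforce mechanically. Here is a minimal sketch, assuming the hypothetical type prefixes (EVT/CHN/SCR/LCK), environment tags, and the 30-character limit mentioned in this thread:

    # Minimal sketch of a Redwood object-name builder; the prefixes,
    # environments, and Z_ convention are the hypothetical ones from this thread.
    TYPE_PREFIXES = {"event": "EVT", "chain": "CHN", "script": "SCR", "lock": "LCK"}
    ENVIRONMENTS = {"DEV", "QA", "STG", "PRD"}
    MAX_LEN = 30  # Redwood object-name length limit mentioned above

    def build_name(obj_type: str, env: str, free_text: str) -> str:
        if env not in ENVIRONMENTS:
            raise ValueError(f"unknown environment: {env}")
        name = f"Z_{TYPE_PREFIXES[obj_type]}_{env}_{free_text.upper()}"
        if len(name) > MAX_LEN:
            raise ValueError(f"{name!r} exceeds {MAX_LEN} characters")
        return name

    print(build_name("event", "QA", "invoice_loaded"))  # Z_EVT_QA_INVOICE_LOADED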

  • Driver Architecture design question

    Hi,
    I want to move from a system design which uses an intermediate user-space process to coordinate requests going to a device, to one where the applications communicate directly with the driver.
    This will mean handling queues of requests in the driver, with, ideally, multiple user threads calling into the driver to submit requests and (probably) waiting for the responses.
    My question is: what is the best architecture to use for such a system? It does not map properly onto the traditional read/write style of driver I/O, and I was thinking I could use the ioctl interface to submit a structure that contains the request and space for the response. In the ioctl call, the request could be added to a queue going to the device, and the call could block waiting for receipt of the response.
    I'm not sure that this is the best way to go about it (is ioctl really meant for this kind of operation, and will it scale?), and I'd be very grateful for any suggestions.
    Thanks in advance,
    Diarmuid

    Yes, ioctls are commonly used for this sort of thing.
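
    To make the pattern concrete, here is a minimal sketch of the user-space side of such a blocking request/response ioctl (shown in Python via fcntl; the device path, ioctl request number, and struct layout are all hypothetical):

    # User-space side of a blocking request/response ioctl, as described above.
    # The device path, request code, and struct layout are placeholders.
    import fcntl
    import os
    import struct

    MYDEV_IOC_SUBMIT = 0xC0444D00  # placeholder for an _IOWR()-style request code

    fd = os.open("/dev/mydriver", os.O_RDWR)

    # Pack the request: a 4-byte opcode plus a 64-byte response buffer.
    buf = bytearray(struct.pack("I64s", 42, b""))

    # The driver queues the request, blocks this caller until the device
    # responds, then fills the same buffer with the response in place.
    fcntl.ioctl(fd, MYDEV_IOC_SUBMIT, buf)

    status, payload = struct.unpack("I64s", bytes(buf))
    print(status, payload.rstrip(b"\x00"))
    os.close(fd)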

  • Architecture Design Question: Integrating AMF and HTTP/REST

    We have an app that is consuming services from BlazeDS over an AMF channel. This approach replaced an earlier implementation that consumed SOAP services; that took place before I inherited the project. Apparently there were tremendous performance gains in switching to AMF, and we don't want to abandon it.
    Now we are creating a new RESTful, HTTP request/response-based service layer that will be shared across several organizations, and the idea is that anyone can then write clients to grab our data (as well as data from other repositories in other organizations that implement the common service API). The services include output handlers that are designed to return data in the various formats a user might request (e.g., CSV, XML, JSON, AMF?).
    My question is about how to keep the performance benefits of AMF for our Flex client as the new services move to the HTTP/REST architecture.
    Our current thinking is to add the BlazeDS jars to the new webapp and use the message broker to route as you normally would, but the destination would essentially be an adapter class that takes the AMF requests, passes them on to the RESTful access points of our services, and then transforms the responses back to AMF.
    I just started reading Shashank Tiwari's Professional BlazeDS and came across the chapter on using BlazeDS as a server-side proxy. Is this a viable approach for what I am trying to do? I also see references to extending BlazeDS by creating custom adapters. Is this the right track? I'm sure this is a common problem, and I'm looking for a discussion of possible solutions. Any ideas?

    Hi,
    In Lync Server 2013, stretched pools are not supported for the Front End, Edge, Mediation, and Director server roles; you need two Lync pools.
    If one pool fails, an administrator can declare an emergency and fail over the pool to the backup pool. That is done by using:
    Invoke-CsPoolFailover –PoolFQDN <Pool fqdn> –DisasterMode –Verbose
    More details:
    http://blog.avtex.com/2012/07/26/understanding-lync-2013-server-failover/
    Best Regards,
    Eason Huang
    TechNet Community Support

  • Question on best practice to extend schema

    We have a requirement to extend the directory schema. I wanted to know the standard practice adopted:
    1) Is it good practice to manually create an LDIF so that this can be run on every deployment machine at every stage?
    2) Or should the schema be created through the console the first time, and the LDIF file from this machine copied over to the schema directory of the target server?
    3) Should the custom schema be appended to the 99user.ldif file, or is it better to keep it in a separate LDIF?
    Any info would be helpful.
    Thanks
    Mamta

    I would say it's best to create your own schema file. Call it 60yourname.ldif and place it in the schema directory. This makes it easy to keep track of your schema in a change control system (e.g., CVS). The only problem with this is that schema replication will not work; you have to manually copy the file to every server instance.
    If you create the schema through the console, schema replication will occur, since schema replication only happens when schema is added over LDAP. The schema is written to the 99user.ldif file. If you choose this method, make sure you save a copy of the schema you create in your change control system so you won't lose it.
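
    For reference, adding schema "over LDAP" (which is what the console does, and what triggers both replication and the write to 99user.ldif) can also be scripted. A minimal sketch with python-ldap, where the server, credentials, OIDs, and names are all hypothetical:

    # Minimal sketch: add a custom attribute type and object class over LDAP.
    # The server, credentials, OIDs, and names below are placeholders.
    import ldap

    conn = ldap.initialize("ldap://ds.example.com:389")
    conn.simple_bind_s("cn=Directory Manager", "password")

    attr_def = (b"( 1.3.6.1.4.1.99999.1.1 NAME 'myCustomAttr' "
                b"DESC 'example attribute' "
                b"SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )")
    oc_def = (b"( 1.3.6.1.4.1.99999.2.1 NAME 'myCustomPerson' "
              b"SUP inetOrgPerson STRUCTURAL MAY ( myCustomAttr ) )")

    # Directory Server exposes the schema as the entry "cn=schema".
    conn.modify_s("cn=schema", [
        (ldap.MOD_ADD, "attributeTypes", [attr_def]),
        (ldap.MOD_ADD, "objectClasses", [oc_def]),
    ])
    conn.unbind_s()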

  • XI Configuration Design questions with multi-mapping message mapping object

    Hello,
    I'm having trouble designing a particular scenario for multi-mapping. Currently I'm working with a vendor create and change. BPM is not being used.
    This is what I need:
    I need a CREMDM04 to turn into one or multiple ADRMAS/CREMAS IDocs and potentially a CLFMAS IDoc based on the values in the inbound CREMDM04 IDoc.
    This is what I currently have:
    A CREMDM04 inbound IDoc is multi-mapped to a CREMDM03 (1...9999), another CREMDM03 (0...9999), and a CLFMAS01 (0...9999). At a minimum, only the first CREMDM03 IDoc will be created, and at a maximum all three will be created. The parameters for creating the second CREMDM03 IDoc and the CLFMAS01 IDoc are based on the values in the inbound CREMDM04 IDoc, whereas the first CREMDM03 IDoc will always be created and its values will just be converted/mapped from the inbound CREMDM04 IDoc. This multi-mapping is currently set up via a graphical message mapping and works successfully in the test tab of the mapping object. It has a main message and sub-messages, which are the IDocs. I'm mapping the CREMDM04 to a CREMDM03, then through an ABAP class, and then to an XSL where the inbound CREMDM03 structure is expected to split into ADRMAS and CREMAS outbound IDocs for vendor create/change in the remote R/3 systems.
    After the graphical mapping we have a necessary ABAP class call that calls a BAPI in the remote system. This ABAP class must come after the graphical mapping, since the parameter for the BAPI is based on a converted value from the graphical multi-mapping.
    After the ABAP class call there is finally an XSL message split of the CREMDM IDoc into an ADRMAS and a CREMAS IDoc. There need to be two interface mappings (one each for ADRMAS and CREMAS), since the ABAP classes and XSLs are specific to ADRMAS and CREMAS.
    The CLFMAS IDoc can go directly to the remote system, but since it's within this one multi-mapping, I'm not sure that is possible. I'm not sure whether it will fail once it tries entering the XSL mapping (this is the standard CREMDM message split offered by SAP).
    There are three interface mapping scenarios I can think of, but cannot get to work:
    CREMDM04 to ADRMAS02
    CREMDM04 to CREMAS03
    CREMDM04 to CLFMAS01
    Currently I have the interface mapping structured as follows (I cannot get this to activate, as it appears it does not work):
    Multi-Mapping ==> ABAP Class Call ==> Standard XSL Message Split
    How should I design the interface mapping objects and the configuration objects for this scenario?
    Any help is appreciated, and I will definitely reward points (no need to ask for them in your response).

    Hi,
    I suggest you use a multi-step interface mapping. It is composed of 3 message mappings, step by step:
    Mapping 1: A one-to-one mapping. For the output schema, use a composite schema which includes the 3 IDocs you want.
    Mapping 2: An ABAP mapping. I am not sure whether the ABAP class you mentioned is an ABAP mapping or not. If it is, that's fine. If not, call that ABAP class from your ABAP mapping and make the corresponding change to your message. Return the same structure as output.
    Mapping 3: A one-to-multiple mapping to split the message.
    So basically, as an interface mapping, it's a one-to-multiple mapping, and internally you have 3 steps to realize the mapping.
    In my experience, both the one-to-multiple message mapping and the multi-step interface mapping work well; they did in my project. In the Integration Directory, you have to configure this via the "advanced" function in the receiver determination or interface determination.
    Let me know if anything is unclear.
    Thanks
    Nick

  • Question regarding best practice

    Hello Experts,
    What is the best way to deploy NWGW?
    We recently architected a solution to install the 7.4 ABAP stack, which comes with Gateway. We chose the central Gateway hub scenario in a 3-tier setup. Is this all that's required in order to connect this hub Gateway to the business systems, i.e., ECC? Or do we also have to install the Gateway add-on on our business system in order to expose the development objects to the hub? I'm very interested in understanding how others are doing this and what has been the best way according to your own experiences. I thought creating a trusted connection between the Gateway hub and the business system would suffice to expose the development objects from the business system to the hub, in order to create the Gateway services in the hub out of them. Is this a correct assumption? Happy to receive any feedback, suggestions, and thoughts.
    Kind regards,
    Kunal.

    Hi Kunal,
    My understanding is that in the hub scenario you still need to install an add-on in the backend system (IW_BEP). If your backend system is already a 7.40 system, then I believe that add-on (or its equivalent) should already be there.
    I highly recommend you take a look at SAP Gateway deployment options in a nutshell by Andre Fischer
    Hth,
    Simon
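
    Once a service is exposed on the hub, any client can consume it over plain HTTP/OData. A minimal sketch (hypothetical hub host, service name, and entity set, shown in Python with the requests library):

    # Minimal sketch: read an OData collection from a Gateway hub.
    # Host, service name, entity set, and credentials are placeholders.
    import requests

    url = ("https://gwhub.example.com"
           "/sap/opu/odata/sap/ZPRODUCT_SRV/Products?$format=json")
    resp = requests.get(url, auth=("user", "password"))
    resp.raise_for_status()

    # Gateway's OData v2 JSON envelope: {"d": {"results": [...]}}
    for product in resp.json()["d"]["results"]:
        print(product["ProductID"])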

  • Question on best practice/optimization

    So I'm working with the Custom4 dimension, and I'm going to be reusing the highest member in the dimension under several alternate hierarchies. Is it better to drop the top member under each of the alternate hierarchies, or to create a single new member and copy the value from the top member to the new base one?
    Ex:
    TotC4
    --Financial
    ----EliminationA
    ------EliminationA1
    ------EliminationA2
    ----GL
    ------TrialBalance
    ------Adjustments
    --Alternate
    ----AlternateA
    ------Financial
    ------AdjustmentA
    ----AlternateB
    ------Financial
    ------AdjustmentB
    In total there will be about 8 alternate adjustments (it's for alternate translations, if you're curious).
    So should I repeat the entire Financial hierarchy under each alternate rollup, or just write a rule saying FinancialCopy = Financial? It seems like it would be a trade-off between performance and database size, but I'm not sure if it is even substantial enough to worry about.

    You are better off having alternate hierarchies where you repeat the custom member in question (it would become a shared member). HFM is very fast at aggregating the rollups. This is more efficient than creating entirely new members which would use rules to copy the data from the original member.
    --Chris

  • Architecture design question: layers of components

    I am sooo confused about how to combine components, when I should inherit, and when stuff goes in skins. I have a custom component Tile that looks like this:
    Source (simplified) looks like this:
    <s:SkinnableContainer skinClass="skins.TileSkin">
         <fx:Script>
              <![CDATA[
                   [Bindable] public var tileColor:uint = 0xFF0000;     // Base color of the tile.  The center area will be this color.
                   [Bindable] public var tileThickness:uint = 4;          // How high the tile appears to have its center part raised.
              ]]>
         </fx:Script>
    </s:SkinnableContainer>
    All the drawing happens in the skin.  I subclassed this component to have a LabeledTile:
    Source (simplified) looks like this:
    <components:Tile>
         <fx:Script>
              <![CDATA[
                   protected static const PADDING:int = 3;
                   public function get text1():String { return Label1.text; }
                   public function set text1( str:String ):void { Label1.text = str; }
              ]]>
         </fx:Script>
         <s:Label id="Label1" x="{tileThickness}" y="{tileThickness}"
              height="{height - 2 * tileThickness}" width="{width - 2 * tileThickness}"
              maxDisplayedLines="1" textAlign="center" verticalAlign="middle"
              paddingLeft="{PADDING}" paddingTop="{PADDING}" paddingRight="{PADDING}" paddingBottom="{PADDING}"/>
    </components:Tile>
    I want to have a subclass of LabeledTile called LabeledTileWithGizmo that looks like this:
    I thought this would work:
    <components:LabeledTile width="130" height="32" tileColor="0x0077EE"
                        contentCreationComplete="ContentCreated()">
         <fx:Script>
              <![CDATA[
                    private function ContentCreated():void
                    {
                         Label1.setStyle( "fontSize", 14 );
                         Label1.setStyle( "color", 0xFFFFFF );
                         Label1.setStyle( "fontFamily", "Trebuchet MS Bold, Arial, Helvetica, _sans" );
                    }
              ]]>
         </fx:Script>
         <components:Gizmo x="100" y="4" height="24"/>
    </components:LabeledTile>
    The gizmo shows up on the tile, but the label disappears.  If I put the Gizmo component right next to the Label component in the LabeledTile, they both get drawn.  But I want to have labeled tiles that don't have gizmos, as well as ones that do.  I also want to have different types of gizmos.
    So, should I have one subclass of Tile with optional subcomponents?  And how would I do that?
    Should I be putting more of this in TileSkin?  And have the alternate components be states in the skin?
    Should I have a different skin for each of Tile, LabeledTile, and LabeledTileWithGizmo?  Should the second skin inherit from the first skin, and the third from the second?
    Can I have a skin for a component set styles on a subcomponent (e.g., having a LabeledTile set the fontSize, etc., on the Label subcomponent)?  Or is that even possible?
    I am sooo confused about how all these pieces should fit together.  Any insight would be appreciated.

    Your MXML for LabeledTile has Label1 as a child element in MXML.  When you subclass that class using MXML (<components:LabeledTile ...>), whatever child elements you put in that subclass will replace what you had declared in LabeledTile.
    Sounds like what you want to do is subclass SkinnableContainer to have a label skin part and move your label into the skin.  That way when you add child elements to your container it won't replace the label with your child elements.
    This article is a good start to learn about spark skinning: http://www.adobe.com/devnet/flex/articles/flex4_skinning.html
    If you follow the approach above then here are some answers to your specific questions:
    So, should I have one subclass of Tile with optional subcomponents?  And how would I do that?
    >> You could make the Label an optional skin part, so if someone didn't want the Label to show up they would create a custom skin that doesn't include it.  Another approach would be to expose a showLabel property on your component that controls the visibility of the Label skin part.
    Should I be putting more of this in TileSkin?  And have the alternate components be states in the skin?
    >> Yes move the Label to the skin.  You could use states or expose a showLabel property as I mentioned above.
    Should I have a different skin for each of Tile, LabeledTile, and LabeledTileWithGizmo?  Should the second skin inherit from the first skin, and the third from the second?
    >> Sounds like you could do this all with one component and one skin by adding another custom skin part for the gizmo. Inheritance via MXML skins is not trivial to implement.
    Can I have a skin for a component set styles on a subcomponent (e.g., having a LabeledTile set the fontSize, etc., on the Label subcomponent)?  Or is that even possible?
    >> Yes, you should be able to do that; just call setStyle on the skin part.

  • Question on best practice....

    Friends,
    Final Cut Studio Pro 5/Soundtrack Pro 1.0.3
    Powerbook G4, 2GB Ram
    I have a DV session recorded over 6 hours that I need some assistance with. The audio for the session was recorded in two instances: via a conference "mic" plugged into a Marantz PMD-671 audio recorder onto CompactFlash (located in the front of the room by the presenter(s)), AND via the built-in mics on our Sony HDR-FX1 video camera. Needless to say, the audio recording on the DV tape is not very good (the presenters' voices are distant with lots of "noise" in the foreground), while the Marantz recording is also not great... but better.
    Since these two were not linked together and did not start recording at the same time, the amount/timing of the recordings doesn't match. I'm looking for either of the following:
    (a) Ways to clean up or enhance the audio recording on the DV tape so that the "background" voices of the presenters are moved to the foreground and can be amplified properly.
    OR
    (b) A piece of software or a resource that would allow me to easily match my separate audio recording from the Marantz to the DV tape video, so I could clean up the "better" of the two audio sources but match the audio and video without having our speakers look like they're in a badly dubbed film.
    Any advice or assistance you could give would be great. Thanks.
    -Steve
    Steven Dunn
    Director of Information Technology
    Illinois State Bar Association
    Powerbook G4   Mac OS X (10.4.6)   2GB RAM

    Hello Steven,
    What I would do in your case, since you have 6 hours, is to edit the show with the audio off the DV camera. Then, as painful as this will be, take the better audio from the recorder and sync it back up until it "phases" with the audio from the DV camera. One audio track will have the DV camera audio on it. Create another audio track, import the audio from the recorder, and place it on the 2nd audio track. Find the exact "bite" of audio and match it to the start of the DV camera audio clip. Now slip/slide the recorder audio until the sound starts to "phase". This will take a while, but in the end it works when the original camera audio was recorded from across the room. Good luck.
