Lync backend databases and Lync local databases placement best practices

We are deploying Lync 2013 for 30,000 Lync users across 2 pools in 2 datacenters, with a dedicated SQL Server for Lync hosting all back-end databases for the Front End, Persistent Chat, and Monitoring roles.
Can anyone provide guidance around disk drives for SQL databases and local databases?
Lync backend databases
Planning for Lync Server 2013 requires critical thinking about the performance impact the system will have on your current infrastructure. A point of potential contention in larger enterprise deployments is the performance of SQL Server and the placement of database and log files, to ensure that Lync Server 2013 is optimal and manageable and does not adversely affect other database operations.
http://blogs.technet.com/b/nexthop/archive/2012/11/20/using-the-databasepathmap-parameter-to-deploy-lync-server-2013-databases.aspx
Is it recommended to place all Lync databases on one drive and all logs on another, or to separate them onto multiple drives using DatabasePathMap?
Lync 2013 local databases 
In the Capacity Planning guidance, Microsoft describes the disk requirements for a Lync 2013 Front End server, given our usage model, as eight disks configured as two drives:
One drive will use two disks in RAID 1 to store the Windows operating system and the Lync 2013 and SQL Express binaries.
One drive will use six disks in RAID 1+0 to store the databases and transaction logs of the two SQL Express instances (RTCLOCAL and LYSS).
Is this enough, or should we separate the local databases and logs onto separate drives as well? Also, how large do the local databases grow?

During the planning and deployment of Microsoft SQL Server 2012 or Microsoft SQL Server 2008 R2 SP1 for your Lync Server 2013 Front End pool, an important consideration is the placement of data and log files onto physical hard disks for performance.
The recommended disk configuration is to implement a RAID 1+0 set using six spindles. Placing all database and log files used by the Front End pool and associated server roles and services (that is, Archiving and Monitoring Server, Lync Server Response Group service, and Lync Server Call Park service) onto the RAID set using the Lync Server Deployment Wizard results in a configuration that has been tested for good performance. The database files and what they are responsible for are detailed in the following table.
http://technet.microsoft.com/en-us/library/gg398479.aspx
Microsoft's TechNet recommendation contradicts the blog's recommendation for Lync 2010.

Similar Messages

  • ASM and Databases Instances (best practices)

    Hello
    Platform AIX 5 TL8,
    Oracle 10.2.0.4
    Context RAC ASM (2 nodes)
    We have 25 Oracle databases running under the same ASM instance on our RAC. I think this is too many, and that splitting them, at least by creating a new ASM instance in another RAC environment, would be better.
    Any comments or advice?
    Best Regards
    Den

    user12067184 wrote:
    Hello
    Platform AIX 5 TL8,
    Oracle 10.2.0.4
    Context RAC ASM (2 nodes)
    We have 25 Oracle databases running under the same ASM instance on our RAC. I think this is too many, and that splitting them, at least by creating a new ASM instance in another RAC environment, would be better.
    Hi Den,
    It is not advisable to have 25 databases in a single RAC. Instead of separate databases, you could also consider creating different schemas in the same database.
    For ASM best practice please follow :
    ASM Technical Best Practices [ID 265633.1]
    Regards
    Rajesh

  • Database Administration - Best Practices

    Hello Gurus,
    I would like to know various best practices for managing and administering Oracle databases. To give you an example of what I am thinking about: if you join a new company and would like to see whether all the databases conform to some kind of standard/best practices, what would you look for? For instance: are the control files multiplexed, is there more than one member in each redo log group, is the temp tablespace using a TEMPFILE or otherwise... something of that nature.
    Do you guys have some thing in place which you use on a regular basis. If yes, I would like to get your thoughts and insights on this.
    Appreciate your time and help with this.
    Thanks
    SS

    I have a template that I use to gather preliminary information so that I can at least get a glimpse of what is going on. I have posted the text below... it looks better as a spreadsheet.
    System Name               
    System Description               
         Name      Phone     Pager
    System Administrator               
    Security Administrator               
    Backup Administrator               
    Below This Line Filled Out for Each Server in The System               
    Server Name               
    Description (Application, Database, Infrastructure,..)               
    ORACLE version/patch level          CSI     
              Next Pwd Exp     
    Server Login               
    Application Schema Owner               
    SYS               
    SYSTEM               
         Location          
    ORACLE_HOME               
    ORACLE_BASE               
    Oracle User Home               
    Oracle SQL scripts               
    Oracle RMAN/backup scripts               
    Oracle BIN scripts               
    Oracle backup logs               
    Oracle audit logs               
    Oracle backup storage               
    Control File 1               
    Control File 2               
    Control File 3                    
    Archive Log Destination 1                    
    Archive Log Destination 2                    
    Datafiles Base Directory                    
    Backup Type     Day     Time     Est. Time to Comp.     Approx. Size
    archive log                    
    full backup                    
    incremental backup                    
    As for "best" practices, well, I think that you know the basics from your posting, but a lot of it will also depend on the individual system and how it is integrated overall.
    Some thoughts I have for best practices:
    Backups ---
    1) Nightly if possible
    2) Tapes stored off site
    3) Archive logs backed up throughout the day
    4) To Disk then to Tape and leave backup on disk until next backup
    Datafiles ---
    1) Depends on the hardware used:
    a) separate datafiles from indexes
    b) separate high-I/O datafiles/indexes onto dedicated disks/LUNs/trays
    2) file names representative of usage (similar to the tablespace name)
    3) keep them a reasonable size, < 2 GB (again, system architecture dependent)
    Security ---
    At least meet DOD - DISA standards where/when possible
    http://iase.disa.mil/stigs/stig/database-stig-v7r2.pdf
    Hope that gives you a start
    Regards
    tim

  • Developers access to apps database user(Best Practice)?

    Hello all,
    I'm trying to understand a developer's need to have access to the "apps" database user. The argument can be made that the need is there for all development efforts, but is that really the case? Should all updates/changes by the "apps" user be executed by the Apps DBA, even in the development environment? I'm trying to get a better understanding of how other organizations are set up. Our shop currently allows developers free rein with the apps user, but there are known security risks in doing so, with "apps" being such a powerful user of the database environment.
    Thanks in advance and any recommendations(Best Practices) will be greatly appreciated.

    Hi,
    We usually give developers access to APPS schema on the development instances. The access to this schema (and other schemas) is revoked on the UAT/PROD (and other instances where the data is not sanitized). When giving such access we are not much worried about the data as much as we are about the objects, but this is not a big issue as we can recover those objects from other instances. Some organizations do not provide their developers with access to APPS schema on the development instances and all the tasks are done by the Apps DBA team. Another approach would be creating a read only APPS schema (search the forum for details) to allow developers to view the objects/data without bugging the DBAs with such routine tasks.
    Thanks,
    Hussein

  • Database creation best practices.

    Hi,
    We are planning to set up a new database, Oracle 10g, on Sun and AIX. It is a data warehouse environment.
    Can anyone please share documents that describe the best practices to be followed during database creation/setup? I googled and got some documents but was not satisfied with them, so I thought of posting this query.
    Regards,
    Yoganath.

    YOGANATH wrote:
    Anand,
    Thanks for your quick response. I went through the link, but it seems to be a brief one. I need a crisp summary document for my presentation, which covers:
    1. Initial parameter settings for a data warehouse to start with, like block_size, db_file_multiblock_read_count, parallel servers, etc.
    2. Memory parameters, SGA, PGA (say, for a server with 10 GB RAM).
    3. How to split tablespaces, e.g. large vs. small.
    If someone has just a crisp outline document covering the above points, I would be grateful.
    Regards,
    Yoganath

    You could fire up dbca, select the 'data warehouse' template, walk through the steps, and at the end do not select 'create a database' but simply select 'create scripts', then take a look at the results, especially the initialization file. Since you chose a template instead of 'custom database' you won't get a CREATE DATABASE script, but you should still get some generated files that will answer a lot of the questions you pose.
    You could even go so far as to let dbca create the database. Nothing commits you to actually using that DB. Just examine it to see what you got, then delete it.
    Edited by: EdStevens on Feb 10, 2009 10:41 AM

  • [iPhone SDK] Database/SQLite Best Practices

    Taking a look at the SQLiteBooks example, the idea of "hydrating" objects on the fly doesn't seem like a best practice to me. If you have a large number of books, for example, and they all need to be hydrated at the same time, I would want to lump the hydrating statements into one large query instead of each Book doing a separate query for its own data.
    I'm imagining these two scenarios:
    1. A large query that loops through the rows of its result set and creates Book objects from them, adding them to an array.
    2. A query to get the primary keys, creating Book objects with only that data. When a book hydrates, it queries the rest of the data specifically for itself. (SQLiteBooks example)
    I can see how the hydrating of method 2 would make it easier to manage the memory stored in each Book object, but I'm really concerned with the speed at which hundreds of Books would hydrate themselves.

    If you know you're going to need many of the objects hydrated at the same time you may want to create some mechanism that allows you to do a single query and hydrate everything at once. In the case of SQLiteBooks, the hydrate method is only called when the user clicks through to the details view for a particular item, so hydrating objects en masse is not necessary.
    I'd be very careful to avoid premature optimization here. We're working with very limited resources on these devices, and it is definitely important to optimize your code, but if you're dealing with a large number of items you could easily shoot yourself in the foot by hydrating a bunch of items at once and using up all your memory. If you dehydrate your objects when you receive a memory warning (as I believe the SQLiteBooks example does) you could end up thrashing - reading all your data from the DB, hydrating your objects, receiving a memory warning, dehydrating, and repeating. As the previous reply states, indexed lookups in SQLite are extremely fast. Even on the iPhone hardware you can probably do hundreds of lookups per second (I haven't run a test, but that'd be my guesstimate).
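    A minimal Python/sqlite3 sketch of the two scenarios described above (the Book table and column names here are hypothetical, not taken from the SQLiteBooks sample):

```python
import sqlite3

class Book:
    def __init__(self, pk, title=None, author=None):
        self.pk, self.title, self.author = pk, title, author

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE book (pk INTEGER PRIMARY KEY, title TEXT, author TEXT)")
conn.executemany("INSERT INTO book VALUES (?, ?, ?)",
                 [(1, "Book A", "Anne"), (2, "Book B", "Ben")])

# Scenario 1: one large query, fully hydrated objects up front.
books_eager = [Book(pk, title, author)
               for pk, title, author
               in conn.execute("SELECT pk, title, author FROM book ORDER BY pk")]

# Scenario 2 (SQLiteBooks-style): fetch only keys now, hydrate each on demand.
books_lazy = [Book(pk) for (pk,) in conn.execute("SELECT pk FROM book ORDER BY pk")]

def hydrate(book):
    # One extra round trip per object: cheap for a single details view,
    # costly if done for hundreds of objects at once.
    book.title, book.author = conn.execute(
        "SELECT title, author FROM book WHERE pk = ?", (book.pk,)).fetchone()

hydrate(books_lazy[0])
print(books_eager[1].title, books_lazy[0].title)
```

    Either way, measure before optimizing; as the reply notes, indexed SQLite lookups are fast even on constrained hardware.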

  • Multiple IPs and Outbound IP on 2008, best practice suggestion...

    Hello,
    I need a suggestion on an issue;
    I have a Windows 2008 R2 SP1 Standard Edition server with 3 IPs; each of them uses the same gateway. By design, the IP closest to the gateway is the default outbound IP on W2K8 R2 SP1 SE.
    I want to choose one of the other 2 assigned IPs as the default outbound one.
    example:
    GATEWAY: 10.0.0.1
    IP1: 10.0.0.2 (default outbound by design)
    IP2: 10.0.0.3 (the one I want it to be default outbound)
    IP3: 10.0.0.4 (not important)
    There are basically 2 choices available to me right now. Can you please take a moment and suggest one of the solutions below, or state the best practice for such a case? Thank you very much in advance =)
    First Solution:
    apply this command: netsh int ipv4 add address 12 10.0.0.4 255.x.x.x skipassource=true (re-adding each address that should not be used as the outbound source with skipassource=true)
    then apply these 3 hotfixes:
    IP addresses are still registered on the DNS servers even if the IP addresses are not used for outgoing traffic on a computer that is running Windows 7 or Windows Server 2008 R2
    http://support.microsoft.com/kb/2386184
    The "skipassource" flag of IP addresses is cleared after you use the GUI to change IP settings of a network adapter in Windows 7 or in Windows Server 2008 R2
    http://support.microsoft.com/kb/2554859
    FIX: IIS Manager does not display IP addresses that are assigned to the network adapter together with the skipassource flag
    http://support.microsoft.com/kb/2551090
    Second Solution:
    Simply create 2 interfaces. Use the first one with the IP that I want as the outbound default, and dump all the other IPs onto the second interface. The 2 interfaces will have the same gateway, but Windows will treat the first one as the outbound default.

    I believe you want to set the metric on the interfaces.
    You can do this by altering your routing table with route.exe, or alternatively you can change the interface metric in the TCP/IP advanced properties for your network adapter (via Control Panel). By default it uses an automatic metric (i.e., Windows chooses which interface to use).
    For your reference (and the reference of anyone else facing a similar challenge), the metric is a weighted value Windows will use to determine which interface to use for a particular endpoint. Here is the definition from the route.exe documentation:
    metric: Specifies an integer cost metric (ranging from 1 to 9999) for the route, which is used when choosing among multiple routes in the routing table that most closely match the destination address of a packet being forwarded. The route with the lowest metric is chosen. The metric can reflect the number of hops, the speed of the path, path reliability, path throughput, or administrative properties.
    Jason Warren
    @jaspnwarren
    jasonwarren.ca
    habaneroconsulting.com/Insights
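    The selection rule quoted above can be sketched in Python. This is illustrative only: Windows' real selection also weighs interface state and other factors, and the route entries and interface names here are made up.

```python
import ipaddress

def choose_route(routes, destination):
    """Pick a route per the quoted rule: among routes whose network
    contains the destination, prefer the most specific prefix, then
    the lowest metric."""
    dest = ipaddress.ip_address(destination)
    matches = [r for r in routes if dest in ipaddress.ip_network(r["network"])]
    if not matches:
        return None
    # Sort key: longest prefix first (negated), then smallest metric.
    return min(matches,
               key=lambda r: (-ipaddress.ip_network(r["network"]).prefixlen,
                              r["metric"]))

# Two default routes out of different interfaces, as in the question.
routes = [
    {"network": "0.0.0.0/0", "interface": "IP1", "metric": 10},
    {"network": "0.0.0.0/0", "interface": "IP2", "metric": 5},
]
print(choose_route(routes, "8.8.8.8")["interface"])  # the lower-metric route wins
```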

  • NX7K M and F series mixed chassis best practice?

    I have NX7010 chassis with mixed M- and F-series line cards. Functions to implement include VDC, vPC, VRF, and L3 routing. What are the best practices for a mixed chassis? I once saw a Cisco document about this but couldn't find it right now.
    Thanks in advance                 

    I understand Layer 3 functions should be performed by M1 ports. But if I use F2e 10G ports to build a trunk between 2 NX7Ks, then use an SVI to form a Layer 3 adjacency across this trunk connection, is it OK?
    I could use M1 1G ports to build this trunk too, but I would prefer the F2e 10G ports if this is OK.
    Thanks

  • Question about database structure - best practice

    I want to create, display, and maintain a data table of loan rates. These rates will be for two loan categories - Conforming and Jumbo. They will be for two loan terms - 15-year and 30-year. Within each term, there will be a display of points (0, 1, 3) - rate - APR.
    For example -
    CONFORMING
    30 year
    POINTS  RATE   APR
    0       6.375  6.6
    1       6.125  6.24
    3       6.0    6.12
    My first question is -
    Would it be better to set up the database with 5 fields (category, term, points, rate, apr), or 13 fields (category, 30_zeropointRate, 30_onepointRate, 30_threepointRate, 30_zeropointAPR, 30_onepointAPR, 30_threepointAPR, 15_zeropointRate, 15_onepointRate, 15_threepointRate, 15_zeropointAPR, 15_onepointAPR, 15_threepointAPR)?
    The latter option would mean that my table would only contain two records - one for each of the two categories. It seems simpler to manage in that regard.
    Any thoughts, suggestions, recommendations?
    Murray --- ICQ 71997575
    Adobe Community Expert
    (If you *MUST* email me, don't LAUGH when you do so!)
    ==================
    http://www.projectseven.com/go
    - DW FAQs, Tutorials & Resources
    http://www.dwfaq.com - DW FAQs,
    Tutorials & Resources
    ==================
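    As a sketch of the normalized 5-field option (Python with sqlite3; the table and column names are illustrative, not from the post):

```python
import sqlite3

# Normalized design: one row per (category, term, points) combination.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE loan_rates (
    category TEXT, term INTEGER, points INTEGER, rate REAL, apr REAL)""")
conn.executemany("INSERT INTO loan_rates VALUES (?, ?, ?, ?, ?)",
                 [("CONFORMING", 30, 0, 6.375, 6.6),
                  ("CONFORMING", 30, 1, 6.125, 6.24),
                  ("CONFORMING", 30, 3, 6.0,   6.12)])

# One query answers "all 30-year Conforming rates"; with the 13-column
# design the same lookup means picking columns by name in application code.
result = conn.execute(
    "SELECT points, rate, apr FROM loan_rates "
    "WHERE category = ? AND term = ? ORDER BY points",
    ("CONFORMING", 30)).fetchall()
print(result)
```

    Adding a new category or term is then an INSERT rather than a schema change, which is the flexibility the replies below argue for.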

    Thanks, Pat. I'm pretty sure that this is a dead-end expansion. The site itself will surely expand, but I think this particular need will be informational only....
    Murray
    "Pat Shaw" <[email protected]> wrote in message
    news:[email protected]...
    > But if the site ever wants to expand on its functionality etc. it can be
    > very difficult to get round a de-normalised database. You can find that
    > you have tied yourself in knots and the only solution is to go back and
    > redesign the database, which often includes major redesigning of the
    > front-end too.
    >
    > If you are confident that this will not be the case then go with your
    > initial thoughts, but don't be too lenient just in case. Leave yourself a
    > little scope. I always aim for 3rd normal form as this guarantees a robust
    > database design without being OTT.
    >
    > Pat.
    >
    >
    > "Joris van Lier" <[email protected]> wrote in
    message
    > news:[email protected]...
    >>
    >>
    >> "Murray *ACE*" <[email protected]> wrote in message
    >> news:[email protected]...
    >>
    >> In my opinion, normalizing is not necessary with small sites; for example,
    >> the uber-normalized database design I did for the telcost compare matrix
    >> (http://www.artronics.nl/telcostmatrix/matrix.php) proved to be totally
    >> overkill.
    >>
    >> Joris
    >
    >

  • Windows Azure SQL Databases Statistics Best practices

    On SQL Azure, is it a good practice to have statistics auto-update disabled, or otherwise?
    Please do not compare Azure to the on-premises SQL engine; those who have worked on Azure know what I mean.
    It is a pain to maintain the indexes, especially if they have BLOB-type columns; no online index rebuilds are allowed in Azure. I was targeting statistics as I see the data being frequently updated, so maybe I can have the developers update the stats as soon as they do a major update/insert, or I can have a job do it on a weekly basis if I turn off the auto stats update.
    I execute a stats FULLSCAN update every week, but I think it is overwritten by the auto update stats. So, back to my question: does anyone have any experience with turning off stats on Azure (any benefits)?

    You can't disable auto stats in WASD.  They're on by default and have to stay that way.
    Rebuilding indexes is possible, but you have to be careful how you approach it.  See my blog post for rebuilding indexes:
    http://sqltuna.blogspot.co.uk/2013/10/index-fragmentation-in-wasd.html
    As a rule I wouldn't have LOB columns as part of an index key - is that what's causing you issues?
    Statistics work the same as on-premises, in that they are triggered when a certain threshold of changes is reached (or by some other triggers). That's not a bad thing though, as it means they're up to date. Is there any reason you think this is causing you issues?
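    The change threshold mentioned above can be sketched as follows. The 500 + 20% figures are the traditional rule of thumb documented for SQL Server auto-update statistics (newer versions use a dynamic threshold for large tables), so treat this as an approximation, not the engine's exact logic:

```python
def stats_update_due(row_count, modifications):
    """Traditional SQL Server auto-update-statistics rule of thumb:
    small tables (<= 500 rows) update after 500 modifications;
    larger tables after 500 + 20% of the row count."""
    if row_count <= 500:
        return modifications >= 500
    return modifications >= 500 + 0.20 * row_count

print(stats_update_due(10_000, 1_000))  # below 500 + 2,000 -> not yet due
print(stats_update_due(10_000, 2_600))  # past the threshold -> due
```

    This is why frequently updated tables can hit auto-update often, and why a weekly FULLSCAN job can appear to be "overwritten" by a sampled auto-update in between.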

  • CF Standard, MS Exchange and SQL on one computer - best practices?

    After attending Ryan Favro's session on VMware, I realized my one box should function to its full capacity (it's a dual physical processor Supermicro/AMD Opteron with 8 hot-swap Raptors, expandable to a quad 2.6GHz/8GB RAM).
    I had planned on running CF Standard (bought it), SQL Express, and in fact all of my development stuff (Flex/Photoshop CS2, which I also bought) on this box, but that idea blew up when I realized my PNY Quadro wouldn't run under Server 2003 SBS (which, you guessed it, I also bought, LOL). So I've stripped out the GUI stuff - it's just going to be a server (web/email and database) now. It's a web server for my own sites, so no reason to spend money I don't have on Enterprise CF. I will probably run a bunch of stuff for friends under my domain in folders, i.e., faceitphoto.ca/mystuff, faceitphoto.ca/friend1stuff, faceitphoto.ca/friend3stuff... nothing more advanced than that. They can forward/cloak however they see fit at their expense.
    I see SQL is available in a version that offers web services - is that of any value to me at all? Or am I good with just MS SQL Express?
    Is MS Exchange going to function fine as the email server in CF, i.e., can I point to it as the source in Administrator? I have the MX entry, or whatever it's called, from my ISP for faceitphoto.ca pointed to my static IP.
    Most importantly, as per the title, how would you recommend setting this up? Should I buy XP Pro and run SQL under it, and CF under my Server 2003 with the mail server (Exchange) running in the same environment? Is VMware the answer here?
    Any other suggestions? I'm pretty much out of money after the laptop comes...
    And yes, I'm a bit out of my league here, so I will take the recommendations to someone who knows what they are doing. I want the OS/Apps drives all Ghosted when it's done (running, configured, current updates applied). This isn't for a business (yet), so please keep your $$$ recommendations with that in mind... but I am very serious about trying to build something I COULD turn into a small business down the road, i.e., I like to build photo galleries and have 'informal' expectations at work to maybe cobble together a dashboard with CF/SQL/Flex2/FDS2 and a few mgmt/reporting areas (they'd run it from there; this would be a development environment). Who knows from there...
    Shawn

    hi
    "I see SQL Is available in a version that offers web services - is that valid to me at all? Or am I good with just the MS SQL Express?"
    > SQL Express is just fine for a start; if you are not happy with the stripped-down SQL you can always move to another version and import your DBs there.
    "Is MS Exchange going to function fine as the email server in CF, i.e., can I point it as the source in Administrator? I have the MX entry or whatever it's called from my ISP for faceitphoto.ca pointed to my static IP."
    > Yes, you can. You can also still use your ISP's mail server if you don't want to use/maintain Exchange.
    "Most importantly, as per the title, how would you recommend setting this up? Should I buy XP Pro and run SQL under it, and CF under my Server 2003 with the mailserver (Exchange) running in the same environment? Is vmware the answer here?"
    > The same environment is fine. You could spend $ on a dedicated SQL box, but why? You can use VMware too; it's up to you how you want to maintain your dev/prod environment.
    "I want the OS/Apps drives all Ghosted when it's done (running, configured, current updates applied)."
    > Set up some RAID on your disks, e.g. 2 system disks and the rest for data and/or apps.
    With VMware you can easily take machine images, which makes life easy when trouble comes.
    If you do not choose VMware, you can use Symantec Ghost software to create the images.
    Select some backup software and maybe an external HD (network capable) where you do your daily backups.
    And a firewall to protect your stuff!
    cheers
    kim

  • Design Choices and is LiveCycle needed? Best practices for using RTMP/AMF over HTTP/XML communication

    Hi,
    I am new to flex/RIA. I am exploring different design choices especially in client server communication. On client side we will be using Flash based RIA (using Actions scripts).
    There will be some simple forms (for login, registration, payments, etc.) and some simple reports, including several graphs and charts. Each chart might have 1,000 to 1,500 data points. There is no video or audio content as such. On the server side we have servlets, a Java API, and some EJBs to provide the business logic and real-time prices/content (price updates are usually every 10 seconds). Some of the content will be static as well.
    I have following questions in my mind. Is it worth it to use RTMP/AMF channels for the followings?
    1. For simple forms processing (Mapping Actions scripts classes to Java classes). Like to display/retrieve/update data for/from registration forms.
    a. If yes, why? Am I going to be locked into LCDS? Is it worth it? What could be the cons in heavy-usage/traffic scenarios?
    b. If not, what are the alternatives? Should I create web services? Or are servlets alone sufficient (i.e., only an HTTP + Java based server side, with no LCDS/RTMP/AMF)? All forms need to communicate over a secure channel.
    2. For pushing the real-time prices/content, which we may need to update every 15 seconds on the user interface using graphs and charts: can I do it in some standard J2EE/JMS way with RIA (Flex) on the front end, i.e., the Flash application keeps pulling data from some topic? Data can be updated after a few seconds or a few minutes, which can't be predicted.
    3. Are there any scalability issues for using RTMP? What happens if concurrent users increase 10 times within a year?
    4. What are the real advantages of using RTMP/AMF instead of simple HTTP/HTTPS probably using xml based objects
    5. Do I need to use LCDS if I am using AMF only on the client side? Basically, if I am sending an object in the form of XML from a servlet, can some technology in Flash (probably AMF) on the client side map it to an ActionScript object?
    6. What are the primary advantages of using LCDS in a system? Is there any alternate solutions? Can I use some standard solutions for data push technologies?
    I would like that my server side implementation can be used by multiple types of clients e.g. RIA browser based, mobile based, third party software (any technology) etc.
    I appreciate if you can kindly refer me to some reading materials which can help me deciding the above. If this is not the right place to post this message then please do refer me to the place where I can post such questions.
    Thanks and Kind regards,
    Jalal

    Hi Jalal,
    Let me see if I can help with some of your questions
    1. Yes, you can use LCDS for simple forms processing. Any time you want to move data between the Flex client and the server, LCDS (or its free open source cousin BlazeDS) is going to help. I would expect you would use the mx:RemoteObject MXML tag to invoke server-side code, passing it the form data input by the application user.
    2. If you need to push near real-time data, LCDS gives you the RTMP channel, which can scale quite nicely. You can then use the mx:Consumer MXML tag to subscribe the clients to the messages, which can come from almost anywhere, including JMS topics or queues.
    3. RTMP (included in LCDS) is the best option for scaling to tens of thousands of users, and the LCDS servers can be clustered to provide better scaling.
    4. The AMF3 protocol used over the RTMP channels performs much faster than simple XML over HTTP. See this blog posting for some tests: http://www.jamesward.org/census/.
    5. If you are sending a Flex application XML, then I would recommend using the E4X API to work with the XML. This is a pretty nice and powerful way to work with XML. If you want ActionScript objects (and probably better performance), then AMF serialization to ActionScript objects is the way to go.
    6. Primary advantages? There are many, but mainly you can avoid thinking about the plumbing and concentrate on solving your application and business logic problems.
    Hope this helps you a little
    Tom Jordahl
    Adobe

  • Locale Resource Bundle Best Practice

    Hi
    I have a Flex application that loads its locale resource bundle from a service.
    So when the HTTPService loads the bundle, it populates an instance of a class "I18NBundle" that contains all the bundle properties. For instance, you could do:
    i18nBundle.hello_message and it would return "Hello"
    I'm currently using Cairngorm for this project, so this i18nBundle instance is on the model locator.
    What I'm seeing, and what I don't like, is that for a component to get the bundle it must go through the model locator and then to the i18nBundle.
    Instead, I would like each component not to rely on this for getting the bundle.
    I guess I could create a "bundle" property on each component class and then pass it the reference to the bundle when it is instantiated. That seems messy, and initializing this property could be difficult in some cases, for instance in a DataGrid cell renderer.
    Another option would be turning the I18NBundle class into a singleton that is initialized when the service response is received. Since all application components should/must access the same locale bundle, I guess this could be a better option.
    What do you think about this?
    Do you think there is a better way to achieve this with Flex?
    Any opinion or recommendation would be appreciated.
    (I don't wish to use the Adobe Flex approach of having the properties files in the Flex project and compiling them into the SWF.)
    thanks in advance.
    Polaco.
    ps: If you think I haven't expressed myself correctly please let me know and I will rewrite it.
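    The singleton option described above can be sketched as follows (Python standing in for ActionScript; class and method names are illustrative, not an actual Flex API):

```python
class I18NBundle:
    """Singleton-style bundle: one shared instance, populated once when
    the service response arrives, readable from any component without
    going through a model locator."""
    _instance = None

    @classmethod
    def shared(cls):
        # Lazily create the single shared instance.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self.properties = {}

    def load(self, response):
        # Called once with the parsed service response.
        self.properties.update(response)

    def get(self, key):
        # Fall back to the key itself when a message is missing.
        return self.properties.get(key, key)

# Service callback initializes the shared bundle...
I18NBundle.shared().load({"hello_message": "Hello"})
# ...and any component can then read the same instance directly.
print(I18NBundle.shared().get("hello_message"))
```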

    I have managed to extend IResourceBundle and add it to the ResourceManager.
    The only problem now is that my bundle does not represent a language, and it doesn't need a name either, since the localization is done in the web application and the appropriate bundle is returned in the request's response.
    So its locale attribute value is "".
    And its name is "".
    I can display a property correctly if I use the following code:
    (resourceManager.getResourceBundle('','')).content.helloMessage
    but it doesn't work if I try to retrieve it like:
    resourceManager.getString("", "helloMessage");
    any ideas?
    thanks

  • Use both iPhoto and Aperture with one library-best practice?

    I'd like to use both iPhoto and Aperture, but have both programs use/update just one photo library.  I have the latest versions of both programs, but was wondering if the optimum approach would be to:
    a) point Aperture to the existing iPhoto library and use that as the library for both programs
    or
    b) import the entire iPhoto library into a new Aperture library, delete the iPhoto library, and point iPhoto to use the Aperture library.
    I should point out that up to now I've been using iPhoto exclusively and have close to 20K photos in the iPhoto library, tagged with Faces, organized into various albums, etc., if that makes a difference...
    Appreciate any advice!
    Thanks,
    Dave

    Thanks Frank!  I'll try it that way.
    Appreciate the help!

  • Managing VMs in Azure and AWS : What's the best practice?

    Hi guys,
      I'm doing some research to understand my options for managing machines in both Azure and AWS.  I'd ideally like to treat each cloud service as a separate physical office site, demarcating boundaries and putting a Distribution
    Point (or maybe a secondary site, based on the bandwidth costs) up in each to handle content distribution.  The VMs within these services are routable: I can ping them, and they can ping back to the core datacenter where I plan to put the primary site.
      Have you guys done something like this before?  How did it work out?  How would you recommend I approach managing fewer than 50 Azure + AWS VMs?  I'm trying to keep this infrastructure as simple as possible; it currently
    has only one primary site.
      Thanks!
    If this post was helpful, please vote up or 'Mark as Answer'! More of this sort of thing at www.foxdeploy.com

    Managing Azure VMs from an on-prem ConfigMgr instance is supported and should work no problem assuming you have a VPN connection set up (which it sounds like you do):
    http://support.microsoft.com/kb/2889321
    There really is nothing special here. It's just network traffic and the VPN makes the actual physical location of the target managed system irrelevant. As long as the client can communicate with the MP, DP, and WSUS instance on the normal ports (80, 8530,
    and 10123 by default) then it'll work.
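    One quick sanity check from a cloud VM is a TCP probe of those default ports back over the VPN. A minimal Python sketch (the site-system hostname here is hypothetical; substitute your own MP/DP/WSUS host):

    ```python
    import socket

    def port_open(host, port, timeout=3.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Hypothetical site-system host; 80, 8530, and 10123 are the default
    # MP/DP, WSUS, and client-notification ports mentioned above.
    host = "cm01.corp.example.com"
    for port in (80, 8530, 10123):
        state = "reachable" if port_open(host, port) else "NOT reachable"
        print(f"{host}:{port} {state}")
    ```

    If any port shows as not reachable, check the VPN routing and any NSG/security-group rules before troubleshooting the client itself.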
    The same goes for Amazon: although it isn't specifically supported by Microsoft, neither is your internal networking infrastructure.
    Jason | http://blog.configmgrftw.com | @jasonsandys
