Best practice related to EWA

Hi folks,
Is there any best practice for activating EWA reports? I mean, does SAP recommend activating the EWA report for every single system (DEV, QA and PRD)?
If anyone has an OSS Note that covers this, it would be very helpful!
Tks a lot!

Hello,
This is the best practices document for activating EarlyWatch Alerts.
https://websmp101.sap-ag.de/~sapidb/011000358700001873212008E
You can generate an EWA for any system that meets the criteria of Note 1257308 - FAQ: Using EarlyWatch Alert
Typically, customers do not run EWA reports on systems other than QAS and PRD, but you can create one for any
system type. However, the products must meet the requirements in Note 1257308.
Also, products must be defined correctly in SMSY, or you may find that you cannot generate the EWA.
Under http://Service.sap.com/diagnostics > Media Library
you can find a Landscape Setup Guide that can help you define the usages and technical systems for most products.
This should help.
Regards,
Paul

Similar Messages

  • Best practice "changing several related objects via BDT" (Business Data Toolset)

    Hello,
    I want to start a discussion to find a best-practice method for changing several related master data objects via BDT. At the moment we are faced with miscellaneous requirements where we have a master data object that uses the BDT framework for maintenance (in our case an insured object). While changing or creating the insured object, several related objects, e.g. a business partner, should also be changed or created. So I am searching for a best-practice approach for implementing such a solution.
    One idea was to call a report via SUBMIT AND RETURN in event DSAVC or DSAVE. Unfortunately this implementation method has only poor options for error handling, since the error handling would have to happen asynchronously. It is also hard to keep the LUW together.
    Another idea is to call an additional BDT instance in the DCHCK event via FM BDT_INSTANCE_SELECT with the parameters iv_xpush_classic = 'X' and iv_xpop_classic = 'X'. So far we have not gotten this solution working correctly, because there is always something missing (e.g. global memory is not transferred correctly between the two BDT instances).
    So hopefully you can report about your implementations so that we can find a best-practice approach for such requirements.
    BR,
    Dominik

  • Best Practice of Essbase Cube Join Relational Table

    Hi,
    As the topic says, what's the best practice for joining an Essbase cube to a relational table?
    Please note that I want description columns, not measure columns, from the relational table.
    Thanks.

    As a starting point, check out what Ed put together a while ago:
    https://community.altiusconsulting.com/blogs/altiustechblog/archive/2008/10/24/are-essbase-and-oracle-bi-enterprise-edition-obiee-a-match-made-in-heaven.aspx
    http://community.altiusconsulting.com/blogs/altiustechblog/archive/2008/11/03/enhance-your-essbase-month-dimension-to-aid-sorting.aspx
    Cheers,
    C.

  • Looking for customization best practices related to Timezone feature in EBS

    Hi all,
    We are updating our 11.5.10.2 env to deploy the Timezone feature, and I'm anticipating some issues with the customized code as I'm not sure the developers followed best practices for handling dates.
    Are there any documents where I can find the best practices to follow for customized code in order to be compliant with the EBS Timezone feature?
    Like, for instance, Date API usage or the work needed to convert Date data for interfaces?
    Thanks in advance
    Chris

    Are there any documents where I can find the best practices to follow for customized code in order to be compliant with the EBS Timezone feature? Like, for instance, Date API usage or the work needed to convert Date data for interfaces?
    I do not think such a doc exists.
    You may review these docs and see if it helps.
    User Preferred Time Zone Support Guidelines for Applications 11i10CU2 [ID 330075.1]
    User-Preferred Time Zone Support in Oracle E-Business Suite Release 12 [ID 402650.1]
    Globalization Guide for Oracle Applications Release 12 [ID 393861.1]
    Also, please log a SR to confirm the existence of the doc you are looking for.
    Thanks,
    Hussein

  • Best Practice Table Creation for Multiple Customers, Weekly/Monthly Sales Data in Multiple Fields

    We have a homegrown Access database originally designed in 2000 that now has an SQL back-end.  The database has not yet been converted to a higher format such as Access 2007 since at least 2 users are still on Access 2003.  It is fine if suggestions
    will only work with Access 2007 or higher.
    I'm trying to determine if our database is the best place to do this or if we should look at another solution.  We have thousands of products each with a single identifier.  There are customers who provide us regular sales reporting for what was
    sold in a given time period -- weekly, monthly, quarterly, yearly time periods being most important.  This reporting may or may not include all of our product identifiers.  The reporting is typically based on calendar-defined timing although we have
    some customers who have their own calendars which may not align to a calendar month or calendar year so recording the time period can be helpful.
    Each customer's sales report can contain anything from 1,000-20,000 rows of products for each report.  Each customer report is different and they typically have between 4-30 columns of data for each product; headers are consistently named.  The
    product identifiers included may vary by customer and even within each report for a customer; the data in the product identifier row changes each week.  Headers include a wide variety of data such as overall on hand, overall on order, unsellable on hand,
    returns, on hand information for each location or customer grouping, sell-through units information for each location or customer grouping for that given time period, sell-through dollars information for each location or customer grouping for that given time
    period,  sell-through units information for each location or customer grouping for a cumulative time period (same thing for dollars), warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's
    current status code for that product, and so on.
    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period).  Due to the overall volume of information and number of Excel sheets, cross-referencing can take considerable time.  Is it possible to
    set up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, by specific products or
    categories across specific weeks/months/years, etc.?  We do have a separate product table, so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries.  We do need to maintain
    the sales reporting information indefinitely.
    I welcome any suggestions, best practice or resources (books, web, etc).
    Many thanks!

    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period).  Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time.  Is it possible to
    set-up tables .....
    I assume you want to migrate to SQL Server.
    Your best course of action is to hire a professional database designer for a short period like a month.
    Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
    Finally you have to hire an SSRS professional to design reports for your company.
    It is also beneficial if the above professionals train your staff while building the new RDBMS.
    Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • Best practice for if/else when one outcome results in exit [Bash]

    I have a bash script with a lot of if/else constructs in the form of
    if <condition>
    then
        <do stuff>
    else
        <do other stuff>
        exit
    fi
    This could also be structured as
    if ! <condition>
    then
        <do other stuff>
        exit
    fi
    <do stuff>
    The first one seems more structured, because it explicitly associates <do stuff> with the condition.  But the second one seems more logical because it avoids explicitly making a choice (then/else) that doesn't really need to be made.
    Is one of the two more in line with "best practice" from pure bash or general programming perspectives?

    I'm not sure if there are 'formal' best practices, but I tend to use the latter form when (and only when) it is some sort of error checking.
    Essentially, this would be when <do stuff> was more of the main purpose of the script, or at least that neighborhood of the script, while <do other stuff> was mostly cleaning up before exiting.
    I suppose more generally, it could relate to the size of the code blocks.  You wouldn't want a long involved <do stuff> section after which a reader would see an "else" and think 'WTF, else what?'.  So, perhaps if there is a substantial disparity in the lengths of the two conditional blocks, put the short one first.
    But I'm just making this all up from my own preferences and intuition.
    When nested this becomes more obvious, and/or a bigger issue.  Consider two scripts:
    if [[ test1 ]]
    then
        if [[ test2 ]]
        then
            echo "All tests passed, continuing..."
        else
            echo "failed test 2"
            exit
        fi
    else
        echo "failed test 1"
    fi
    versus
    if [[ ! test1 ]]
    then
        echo "failed test 1"
        exit
    fi
    if [[ ! test2 ]]
    then
        echo "failed test 2"
        exit
    fi
    echo "passed all tests, continuing..."
    This just gets far worse with deeper levels of nesting.  The second seems much cleaner.  In reality though I'd go even further to
    [[ ! test1 ]] && echo "failed test 1" && exit
    [[ ! test2 ]] && echo "failed test 2" && exit
    echo "passed all tests, continuing..."
    edit: added test1/test2 examples.
    Last edited by Trilby (2012-06-19 02:27:48)
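    For what it's worth, the early-exit/guard-clause pattern discussed above is not bash-specific; the original question asked about general programming perspectives too. Here is a minimal sketch of the same style in Java (the class and the checks are invented for illustration):
    public class GuardDemo {
        // hypothetical config type, just to have something to test
        record Config(String path, boolean valid) {}

        static void process(Config cfg) {
            if (cfg == null) {                 // failure case 1: bail out early
                System.err.println("failed test 1: no config");
                return;
            }
            if (!cfg.valid()) {                // failure case 2: bail out early
                System.err.println("failed test 2: invalid config");
                return;
            }
            // the main logic stays at the top indentation level; no trailing "else"
            System.out.println("passed all tests, continuing with " + cfg.path());
        }

        public static void main(String[] args) {
            process(new Config("/etc/app.conf", true));
        }
    }
    As with the bash version, each failure case is handled and dismissed immediately, so a reader never has to scan down looking for the matching "else".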

  • Unicode Migration using National Characterset data types - Best Practice ?

    I know that Oracle discourages the use of the national character set and the national character set data types (NCHAR, NVARCHAR), but that is the route my company has decided to take, and I would like to know the best practice regarding this, specifically in relation to stored procedures.
    The database schema is being converted by changing all CHAR, VARCHAR and CLOB data types to NCHAR, NVARCHAR and NCLOB data types respectively, and I would appreciate any suggestions regarding the changes that need to be made to stored procedures and whether there are any hard-and-fast rules that need to be followed.
    Specific questions that I have are :
    1. Do CHAR and VARCHAR parameters need to be changed to NCHAR and NVARCHAR types ?
    2. Do CHAR and VARCHAR variables need to be changed to NCHAR and NVARCHAR types ?
    3. Do string literals need to be prefixed with 'N' in all cases ? e.g.
    in variable assignments - v_module_name := N'ABCD'
    in variable comparisons - IF v_sp_access_mode = N'DL'
    in calls to other procedures passing string parameters - proc_xyz(v_module_name, N'String Parameter')
    in database column comparisons - WHERE COLUMN_XYZ = N'ABCD'
    If anybody has been through a similar exercise, please share your experience and point out any additional changes that may be required in other areas.
    Database details are as follows; the application is written in COBOL and is also being changed to be Unicode compliant:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    NLS_CHARACTERSET = WE8MSWIN1252
    NLS_NCHAR_CHARACTERSET = AL16UTF16

    ## 1. While doing a test conversion I discovered that VARCHAR parameters need to be changed to NVARCHAR2 and not VARCHAR2, same for VARCHAR variables.
    VARCHAR columns/parameters/variables should not by used as Oracle reserves the right to change their semantics in the future. You should use VARCHAR2/NVARCHAR2.
    ##3. Not sure I understand, are you saying that unicode columns(NVARCHAR2, NCHAR) in the database will only be able to store character strings made up from WE8MSWIN1252 characters ?
    No, I meant literals. You cannot include non-WE8MSWIN1252 characters into a literal. Actually, you can include them under certain conditions but they will be transformed to an escaped form. See also the UNISTR function.
    ## Reason given for going down this route is that our application works with SQL Server and Oracle and this was the best option
    ## to keep the code/schemas consistent between the two databases
    First, you have to keep two sets of scripts anyway, because DDL syntax differs between SQL Server and Oracle. There is therefore little benefit in just keeping the data type names the same while so many other things need to differ. If I designed your system, I would use a DB-agnostic object repository and a script generator to produce either SQL Server or Oracle scripts with the appropriate data types, or at least I would use some placeholder syntax and have the application installer replace the placeholders with the appropriate data types per target system.
    ## I don't know if it is possible to create a database in SQL Server with a Unicode characterset/collation like you can in Oracle, that would have been the better option.
    I am not an SQL Server expert but I think VARCHAR data types are restricted to Windows ANSI code pages and those do not include Unicode.
    -- Sergiusz

  • Best practices for ARM - please help!!!

    Hi all,
    Can you please help with any pointers/links to documents describing best practices for who should create the GRC request in the below ARM workflow in GRC 10.0?
    Create GRC request -> role approver -> risk manager -> security team
    The options are: end user / manager / functional super users / security team.
    End user and manager are not possible - we cannot train so many people. The functional team is refusing since it's a lot of work. Please help me with pointers to any best-practices documents.
    Thanks!!!!

    In this case, I recommend proposing that the department managers create GRC Access Requests.  In order for the managers to comprehend the new process, you should create a separate "Role Catalog" that describes what abilities each role enables.  This Role Catalog needs to be taught to the department Managers, and they need to fully understand what tcodes and abilities are inside of each role.  From your workflow design, it looks like Role Owners should be brought into these workshops.
    You might consider a Role Catalog that the manager could filter on and make selections from.  For example, an AP manager could select "Accounts Payable" roles, and then choose from a smaller list of AP-related roles.  You could map business functions or tasks to specific technical roles.  The design flaw here, of course, is the way your technical roles have been designed.
    The point being, GRC AC 10 is not business-user friendly, so using an intuitive "Role Catalog" really helps the managers understand which technical roles they should be selecting in GRC ARs.  They can use this catalog to spit out a list of technical role names that they can then search for within the GRC Access Request.
    At all costs, avoid having end-users create ARs.  They usually select the wrong access, and the process then becomes very long and drawn out because the role owners or security stages need to mix and match the access after the fact.  You should choose a Requestor who has the highest chance of requesting the correct access.  This is usually the user's Manager, but you need to propose this solution in a way that won't scare off the manager - at the end of the day, they do NOT want to take on more work.
    If you are using SAP HR, then you can attempt HR Triggers for New User Access Requests, which automatically fill out and submit the GRC AR upon a specific HR action (New Hire, or Termination).  I do not recommend going down this path, however.  It is very confusing, time consuming, and difficult to integrate properly.
    Good luck!
    -Ken

  • Best practices on number of pipelines in a single project/app to do forging

    Hi experts,
    I need a couple of clarifications from you regarding Endeca Guided Search for an enterprise application.
    1) Say, for example, I have a web application iEndecaApp which is created by imitating the JSP reference application. All the necessary presentation APIs are present in the WEB-INF/lib folder.
    1.a) Do I need to configure anything else to run the application?
    1.b) I have created the web app in Eclipse. Will I be able to run it from any third-party Tomcat server? If not, where do I have to put the WAR file to successfully run the application?
    2) For the above web app "iEndecaApp" I have created an application named "MyEndecaApp" using the deployment template, so one generic pipeline is created. I need to integrate 5 different sources of data. To be precise:
    i)CAS data
    ii)Database table data
    iii)Txt file data
    iv)Excel file data
    v)XML data.
    2.a) What is the best practice for integrating all the data? Do I need to create 5 different pipelines (one for each source) or should I integrate all 5 sources in a single pipeline?
    2.b) If I create 5 different pipelines, should they all reside in the single application "MyEndecaApp", or do I need to create 5 different applications using the deployment template?
    Hope you guys will reply soon. Waiting for your valuable response.
    Regards,
    Hoque

    Point number 1 is very much possible, i.e. running the JSP reference application from a server of your choice. I haven't tried that yet but will shed some light on it once I do.
    Point number 2 - You must create 5 record adapters in the same pipeline diagram and then join them with the help of joiner components. The result must be fed to the property mapper.
    So 1 application, 1 pipeline and all 5 data sources within one application is the ideal case.
    And logically, since they are all related data, there must be some joining conditions, and you can't ask 5 different MDEX engines to serve you a combined result.
    Hope this helps you.
    <PS: This is to the best of my knowledge>
    Thanks,
    Mohit Makhija

  • Best Practices when replacing 2003 server R2 with a new domainname and server 2012 r2 on same lan network

    I have a small office (10 computers with five users) with a Windows Server 2003 R2 machine that has a corrupted AD. The server is essentially a file server and provides authentication. They purchased a new Dell server running Windows Server 2012 R2.
    It seems easier to me to just create a new domain (using their public domain name).
    But I need as little office downtime as possible. Therefore I would like to promote this server to its new domain on the same LAN as the current domain server. I plan to manually replicate the users and folder permissions. Once done, I plan to
    remove the old server from the network and join the office computers to the new domain.
    They are also running a legacy application that will require some tweaking by another tech. I am hoping to prep the new domain before the legacy tech arrives. That is why I would like both domains to co-exist temporarily. I have read
    that the major issues involved in this kind of temporary configuration are related to setting up DNS. They are using the firewall to provide DHCP.
    Are there any best practices documents for this situation?
    Or is there a better or simpler strategy?
    Gary Metz

    I followed the two links below. I think the steps should be the same even though the links describe a 2008 R2 migration.
    http://kpytko.pl/active-directory-domain-services/adding-first-windows-server-2008-r2-domain-controller-within-windows-2003-network/
    http://blog.zwiegnet.com/windows-server/migrate-server-2003-to-2008r2-active-directory-and-fsmo-roles/
    Hope this helps!

  • Best Practices for Defining NDS Java Projects...

    We are doing a Proof of Concept on using NDS to develop non-SAP Java applications.  We are attempting to determine if we can replace our current Java development tools with NDS/WAS.
    We are struggling with SAP's terminology and "plumbing" for setting up/defining Java projects.  For example, what are Tracks, Software Components, Development Components, etc., and when do you define them?  All of these terms are totally foreign to us and do not relate to our current Java environment (at least not that we can see).  We are also struggling with how the DTR and activities tie in to those components.
    If any one has defined best practices for setting up Java projects or has struggled with and overcome these same issues, please provide us with some guidance.  This is a very frustrating and time-consuming issue for us.
    Thank you!!

    Hi Peggy,
    In the Component Model we divide software projects into small components. Components can use other components in a well-defined manner.
    A development object is a part of a component that can be changed or developed in some way; it provides the component with a certain part of its functionality. A development object may be a Java class, a Web Dynpro view, a table definition, a JSP page, and so on. Development objects are always stored as "sources" in a repository.
    A development component can be defined as a frame shared by a number of objects which are part of the software.
    Software components combine development components (DCs) into larger units for delivery and deployment.
    A track comprises the configurations and runtime systems required for developing software component versions. It ensures stable states of deliverables used by subsequent tracks.
    The Design Time Repository (DTR) provides versioned source code management, distributed development of software in teams, and transport and replication of sources.
    You can also find lot of support in SDN for the above concepts with tutorials.
    Refer to this link for an overview of the Java Development Infrastructure (JDI):
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/webas/java/java development infrastructure jdi overview.pdf
    To understand further
    Working with NetWeaver Development Infrastructure:
    http://help.sap.com/saphelp_nw04/helpdata/en/03/f6bc3d42f46c33e10000000a11405a/content.htm
    In the above link you can find all the concepts clearly explained. You can also find the required tutorials for development.
    Regards,
    Vijith

  • ASA 5505 Best Practice Guidance Requested

    I am hoping to tap into the vast wealth of knowledge on this board in order to gain some "best practice" guidance to assist me with the overall setup using the ASA 5505 for a small business client.  I'm fairly new to the ASA 5505 so any help would be most appreciated!
    My current client configuration is as follows:
    a) business internet service (cable) with a fixed IP address
    b) a Netgear N600 Wireless Dual Band router (currently setup as gateway and used for internet/WiFi access)
    c) a Cisco SG-500-28 switch
    d) one server running Windows Small Business Server 2011 Standard (primary Domain Controller)
         (This server is currently the DNS and DHCP server)
    e) one server running Windows Server 2008 R2 (secondary Domain Controller)
    f) approximately eight Windows 7 clients (connected via SG-500-28 switch)
    g) approximately six printers connected via internal network (connected via SG-500-28 switch)
    All the servers, clients, and printers are connected to the SG-500-28 switch.
    The ISP provides the cable modem for the internet service.
    The physical cable for internet is connected to the cable modem.
    From the cable modem, a CAT 6 ethernet cable is connected to the internet (WAN) port of the Netgear N600 router.
    A Cat 6 ethernet cable is connected from Port 1 of the local ethernet (LAN) port on the N600 router to the SG-500-28 switch.
    cable modem -> WAN router port
    LAN router port -> SG-500-28
    The ASA 5505 will be set up with a "LAN" (inside) interface and a "WAN" (outside) interface.  Port e0/0 on the ASA 5505 will be used for the outside interface and the remaining ports will be used for the inside interface.
    So my basic question is, given the information above of our setup, where should the ASA 5505 be "inserted" to maximize its performance?  Also, based on the answer to the previous question, can you provide some insight as to how the ethernet cables should be connected to achieve this?
    Another concern I have is what device will be used as the default gateway.  Currently, the Netgear N600 is set as the default gateway on both Windows servers.  In your recommended best practice solution, does the ASA 5505 become the default gateway or does the router remain the default gateway?
    And my final area of concern is with DHCP.  As I stated earlier, I am running DHCP on Windows Small Business Server 2011 Standard.  Most of the examples I have studied for the ASA 5505 utilize its DHCP functionality.  I also have done some research on the "dhcprelay server" command.  So I'm not quite sure which is the best way to go. First off, does the "dhcprelay server" even work with SBS 2011?  And secondly, if it does work, is the best practice to use the "dhcprelay" command or to let the ASA 5505 perform the DHCP server role?
    All input/guidance/suggestions with these issues would be greatly appreciated!  I want to implement the ASA 5505 firewall solution following "best practices" recommendations in order to maximize its functionality and minimize the time to implement.
    FYI, the information (from the "show version" command) for the ASA 5505 is shown below:
    Cisco Adaptive Security Appliance Software Version 8.4(7)
    Device Manager Version 7.1(5)100
    Compiled on Fri 30-Aug-13 19:48 by builders
    System image file is "disk0:/asa847-k8.bin"
    Config file at boot was "startup-config"
    ciscoasa up 2 days 9 hours
    Hardware:   ASA5505, 512 MB RAM, CPU Geode 500 MHz
    Internal ATA Compact Flash, 128MB
    BIOS Flash M50FW016 @ 0xfff00000, 2048KB
    Encryption hardware device : Cisco ASA-5505 on-board accelerator (revision 0x0)
                                 Boot microcode   : CN1000-MC-BOOT-2.00
                                 SSL/IKE microcode: CNLite-MC-SSLm-PLUS-2.03
                                 IPSec microcode  : CNlite-MC-IPSECm-MAIN-2.06
                                 Number of accelerators: 1
    0: Int: Internal-Data0/0    : address is a493.4c99.8c0b, irq 11
    1: Ext: Ethernet0/0         : address is a493.4c99.8c03, irq 255
    2: Ext: Ethernet0/1         : address is a493.4c99.8c04, irq 255
    3: Ext: Ethernet0/2         : address is a493.4c99.8c05, irq 255
    4: Ext: Ethernet0/3         : address is a493.4c99.8c06, irq 255
    5: Ext: Ethernet0/4         : address is a493.4c99.8c07, irq 255
    6: Ext: Ethernet0/5         : address is a493.4c99.8c08, irq 255
    7: Ext: Ethernet0/6         : address is a493.4c99.8c09, irq 255
    8: Ext: Ethernet0/7         : address is a493.4c99.8c0a, irq 255
    9: Int: Internal-Data0/1    : address is 0000.0003.0002, irq 255
    10: Int: Not used            : irq 255
    11: Int: Not used            : irq 255
    Licensed features for this platform:
    Maximum Physical Interfaces       : 8              perpetual
    VLANs                             : 3              DMZ Restricted
    Dual ISPs                         : Disabled       perpetual
    VLAN Trunk Ports                  : 0              perpetual
    Inside Hosts                      : 10             perpetual
    Failover                          : Disabled       perpetual
    VPN-DES                           : Enabled        perpetual
    VPN-3DES-AES                      : Enabled        perpetual
    AnyConnect Premium Peers          : 2              perpetual
    AnyConnect Essentials             : Disabled       perpetual
    Other VPN Peers                   : 10             perpetual
    Total VPN Peers                   : 12             perpetual
    Shared License                    : Disabled       perpetual
    AnyConnect for Mobile             : Disabled       perpetual
    AnyConnect for Cisco VPN Phone    : Disabled       perpetual
    Advanced Endpoint Assessment      : Disabled       perpetual
    UC Phone Proxy Sessions           : 2              perpetual
    Total UC Proxy Sessions           : 2              perpetual
    Botnet Traffic Filter             : Disabled       perpetual
    Intercompany Media Engine         : Disabled       perpetual
    This platform has a Base license.

    Hey Jon,
    Again, many thanks for the info!
    I guess I left that minor detail out concerning the Guest network.  I have a second Netgear router that I am using for Guest network access.  It is plugged into one of the LAN ports on the first Netgear router.
    The second Netgear (Guest) router is setup on a different subnet and I am letting the router hand out IP addresses using DHCP.
    Basic setup is the 192.168.1.x is the internal network and 192.168.11.x is the Guest network.  As far as the SBS 2011 server, it knows nothing about the Guest network in terms of the DHCP addresses it hands out.
    Your assumption about the Guest network is correct, I only want to allow guest access to the internet and no access to anything internal.  I like your idea of using the restricted DMZ feature of the ASA for the Guest network.  (I don't know how to do it, but I like it!)  Perhaps you could share more of your knowledge on this?
    One final thing, the (internal) Netgear router setup does provide the option for a separate Guest network, however it all hinges on the router being the DHCP server.  This is what led me to the second (Guest) Netgear router because I wanted the (internal) Netgear router NOT to use DHCP.  Instead I wanted SBS 2011 to be the DHCP server.  That's what led to the idea of a second (Guest) router with DHCP enabled.
    The other factor in all this is SBS 2011.  Not sure what experience you've had with the Small Business Server OSs, but they tend to get a little wonky if some of the server roles are disabled.  For instance, this is a small business with a total of about 20 devices including servers, workstations and printers.  Early on I thought, "nah, I don't need this IPv6 stuff," so I found an article on how to disable it and did so.  The server performance almost immediately took a nose dive.  Rebooting the server went from a 5 minute process to a 20 minute process.  And this was after I followed the steps of an MSDN article on disabling IPv6 on SBS 2011!  Well, long story short, I enabled IPv6 again and the two preceding issues cleared right up.  So, since SBS 2011 by "default" wants DHCP set up, I want to try my best to accommodate it.  So, again, your opinion/experience related to this is a tremendous help!
    Thanks!

  • Looking for Some Examples / Best Practices on User Profile Customization in RDS 2012 R2

    We're currently running RDS on Windows 2008 R2. We're controlling users' Desktops largely with Group Policy. We're using Folder Redirection to configure their Start Menus as well.
    We've installed a Server 2012 R2 RDS box and all the applications that users will need. Should we follow the same customization steps for 2012 R2 that we used in 2008 R2? I would love to see some articles by someone who has customized a user profile/Desktop
    in 2012 R2 to see what's possible.
    Orange County District Attorney

    Hi Sandy,
    Here are some related articles below for you:
    Easier User Data Management with User Profile Disks in Windows Server 2012
    http://blogs.msdn.com/b/rds/archive/2012/11/13/easier-user-data-management-with-user-profile-disks-in-windows-server-2012.aspx
    User Profile Best Practices
    http://social.technet.microsoft.com/wiki/contents/articles/15871.user-profile-best-practices.aspx
    Since you want to customize user profile, here is another blog for you:
    Customizing Default users profile using CopyProfile
    http://blogs.technet.com/b/askcore/archive/2010/07/28/customizing-default-users-profile-using-copyprofile.aspx
    Best Regards,
    Amy
    Please remember to mark the replies as answers if they help and un-mark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact
    [email protected]

  • Best practice for dealing with ResultSets, JDBC and JSP?

    I've spent the last three years developing web apps using JSP, Struts and Kodo JDO for persistence. All of the content for the apps was created as Java objects using model classes and saved to an Oracle db. Thus, data retrieved from the db came back as instances of the model classes and was then put into Struts form beans, etc.
    I changed jobs last month and am now having to use servlets with JDBC to retrieve records from db tables and return them in ResultSets. Oh, and I can't use Struts in my JSPs either. I'm beginning to think that I had it easy at my previous job, but maybe that's just because I was used to it.
    So here are my problems/questions:
    I have two tables with a one-to-many relationship that I need to retrieve data from, show in a JSP, and eventually be able to update.
    So here's what I am doing:
    a) In a servlet, I use a SQL statement to join the tables and retrieve the results into a ResultSet.
    b) I created a class with a bunch of String attributes to copy the ResultSet data into, one ResultSet row per bean instance, and then close the ResultSet.
    c) I then add the beans to an ArrayList and save the ArrayList into the session.
    d) Then, in the JSP, I retrieve the ArrayList from the session and iterate over each bean instance, printing the data out to the JSP. There are some logic statements to determine when not to print redundant data caused by the one-to-many join.
    e) I have not written the code to update the data yet but was planning on having separate jsps for updating the (one) table and the (many) table.
    Would most of you do something similar? Would you use one SQL statement to retrieve all of the data for display and use logic to avoid printing the redundant part of the data? Or would you have used separate SQL queries, one for each table? Would you have saved the results into something other than an instance of a bean class that represents one record in the ResultSet? Would you have had a bean class with attributes other than Strings - like a collection attribute to hold the results from the "many" table? The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Your help/opinion will be greatly appreciated!

    Would you use one SQL statement to retrieve all of the data for display?
    Yes.
    And use logic to avoid printing the redundant part of the data?
    No.
    I believe in minimising the number of queries. If it is a simple one-many join on a db table, then one query is better than one + n queries.
    However, I prefer to store the objects in a bean class with attributes other than Strings - i.e. one object, with a collection attribute to hold the related "many" records.
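    To make that concrete, here is a minimal sketch of the "one joined query, one parent bean with a collection of child beans" idea (the Order/OrderLine names, table names and columns are invented for illustration):
    import java.sql.*;
    import java.util.*;

    class OrderLine {
        String itemName;
        int quantity;
        OrderLine(String itemName, int quantity) { this.itemName = itemName; this.quantity = quantity; }
    }

    class Order {
        int id;
        String customer;
        List<OrderLine> lines = new ArrayList<>(); // collection attribute for the "many" side
        Order(int id, String customer) { this.id = id; this.customer = customer; }
    }

    public class OrderDao {
        // One joined query; child rows are grouped under their parent here,
        // so the JSP never needs "skip the redundant columns" logic.
        public static List<Order> loadOrders(Connection con) throws SQLException {
            String sql = "SELECT o.id, o.customer, l.item_name, l.quantity "
                       + "FROM orders o LEFT JOIN order_lines l ON l.order_id = o.id "
                       + "ORDER BY o.id";
            Map<Integer, Order> byId = new LinkedHashMap<>(); // preserves first-seen order
            try (PreparedStatement ps = con.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    int id = rs.getInt("id");
                    Order order = byId.get(id);
                    if (order == null) { // first row for this parent
                        order = new Order(id, rs.getString("customer"));
                        byId.put(id, order);
                    }
                    String item = rs.getString("item_name"); // null when the LEFT JOIN finds no child
                    if (item != null) {
                        order.lines.add(new OrderLine(item, rs.getInt("quantity")));
                    }
                }
            }
            return new ArrayList<>(byId.values());
        }
    }
    The JSP side then reduces to two nested loops over the orders and each order's lines (with JSTL's <c:forEach> if you can use it), and the update pages can work against the same beans.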
    Does the fact you are not using Struts mean that you have to use scriptlet code? (shudder)
    Or are you using JSTL, or other custom tags?
    How about tools like Ant? JUnit testing?
    The way that I am doing everything just seems so cumbersome and difficult compared to using Struts and JDO before.
    Anything different takes adjusting to. Sounds like you know what you're doing for the most part. I agree: in terms of best practices, what you have described so far sounds like a step backwards from what you were previously doing.
    However, I wouldn't go complaining about it too loudly, too quickly. If you're new on the block, there's nothing like complaining about how backwards their work is to put your new workmates' backs up.
    Look on it as a challenge. Maybe discuss it quietly with a team leader, to see if they understand how much easier/better/less error prone such approaches can be?
    Struts, cumbersome as it can be, definitely has the advantage of pushing you to follow good MVC practice.
    Good luck,
    evnafets

  • Best Practice for External Libraries Shared Libraries and Web Dynrpo

    Two blogs have been written on sharing libraries with Web Dynpro DCs, but I would like to know the best practice for doing this.
    External libraries seem to work great at compile time, but when deploying there is often an error related to the external library not being a deployed component.
    Is there a workaround for this besides creating a shared J2EE library, which I have been able to get working? I am not interested in something that merely works; I really want to know the best practice for this. What is the best way to limit the number of JARs that need to be kept in a shared library/external library? When is sharing a ref service etc. a valid approach vs. hunting down the JARs in the portal libraries and storing them in an external library?

    Security is mainly about mitigation rather than being 100% secure; "we have unknown unknowns". The component needs to talk to SQL Server. You could continue to use HTTP to talk to SQL Server, perhaps even get SOAP transactions working, but personally I'd have more worries about using such a "less trodden" path, since that is exactly the area where more security problems are discovered. I don't know your specific design issues, so there might be even more ways to mitigate the risk, but in general you're using a DMZ as a decent way to mitigate risk. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com
