Role testing in QA environments

Hi Experts,
We use the following procedure to create or modify roles:
1) Change or create the role in the DEV system.
2) Generate or create the profile. At this point we create the new profile or regenerate the existing one, but we don't use a special convention for profile names; we just accept SAP's naming proposal for the profile.
3) Include the role in a transport request.
4) Copy the transport request to the other clients in the DEV system.
5) Transport the request to the QA system and then to PRD.
So we have two scenarios here:
Scenario A) We often need a key user to test the role in order to make sure that all authorizations are correct, and also to comply with a quality process.
Scenario B) Sometimes users need to test a group of transactions in order to decide which ones should be included in a new or an existing role. At this point the transactions are new to the user and the transaction documentation is not enough to decide... we should let them test them...
Now, the problem is that the DEV system (customizing client) is not a test client, so it has no data to test with.
In scenario A, if we create a transport request, transport it to QA and the key user reports an authorization error, we have to create a new transport request in DEV with the corresponding fix and transport it to QA as "version 2" of the role.
In scenario B it's possible to create a test role in DEV and proceed in the same way, but when the test is finished, it's necessary to create a new transport request to delete the role. This is pretty ridiculous...
Then, my question is:
If the test is going to take place in the QA system, is it advisable to create roles in that environment for this purpose and delete them afterwards? In this case, I should use a special naming convention for the profiles to avoid problems (e.g. Z_TEST<nr>). Does role deletion work properly in this scenario? I want to make sure that problems won't arise from this role and profile creation/deletion.
I would also like to know what procedure you follow in scenarios A and B.
Hoping for your kind advice.
Cheers,
Diego.

Then with the information gathered from the test, I want to change or create the corresponding REAL role in DEV and use the transport system to move the changes to QA and PRD.
That is what I meant by "parallel maintenance" - you should not forget to build what did work into the real role and scrap the junkyard afterwards.
And just as a sort of survey... how do you test the roles?
Ideally you combine it with functional tests. I automate it to a large extent and generate the roles automatically, as well as reorganize them again after the changes are transferred to the real role. I also work out the optimization of the proposals and transfer the missing values to SU24 instead of the role if I find that it makes sense.
Cheers,
Julius

Similar Messages

  • Data mismatch in Test and Prod environments

    Hi,
    We have a query in the Test and Prod environments. This query is not giving the same result for Test and Production. Please have a look and share your thoughts.
    Select D1.C3, D1.C21, D2.C3, D2.C21
    from
      (select sum(F.X_SALES_DEDUCTION_ALLOC_AMT) as C3,
              O.Customer_num as C21
       from ESA_W_ORG_D O,
            ESA_W_DAY_D D,
            ESA_W_SALES_INVOICE_LINE_F F
       where O.ROW_WID = F.CUSTOMER_WID
         and D.ROW_WID = F.INVOICED_ON_DT_WID
         and D.PER_NAME_FSCL_MNTH = '2012 / 12'
       group by O.Customer_num) D1,
      (select sum(F.X_SALES_DEDUCTION_ALLOC_AMT) as C3,
              O.Customer_num as C21
       from Sa.W_ORG_D@STPRD O,
            Sa.W_DAY_D@STPRD D,
            Sa.W_SALES_INVOICE_LINE_F@STPRD F
       where O.ROW_WID = F.CUSTOMER_WID
         and D.ROW_WID = F.INVOICED_ON_DT_WID
         and D.PER_NAME_FSCL_MNTH = '2012 / 12'
       group by O.Customer_num) D2
    where D1.C21 = D2.C21
      and D1.C3 <> D2.C3;
    I have done the following steps:
    1. I created a temporary table and searched for duplicate records (if any were found I planned to delete them), but didn't find any duplicates. I also searched for common column values using an equi-join condition. Are there any possibilities for a data mismatch apart from this?
    2. The query takes around 45 minutes to return its output. I want to improve its performance, so I created a unique index on 5 columns, but it still takes the same time.
    I also ran the query with the ALL_ROWS hint, but it still takes the same time.
    Can you suggest anything else to improve the performance?
    appreciate your support.
    Thanks.
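    A gap in the equi-join check from step 1 is that it only compares customers that exist on both sides, so a row present in only one environment never shows up. A minimal two-way check with MINUS, reusing the tables and filter from the query above (a sketch, not tuned):
    select O.Customer_num, sum(F.X_SALES_DEDUCTION_ALLOC_AMT)
    from ESA_W_ORG_D O, ESA_W_DAY_D D, ESA_W_SALES_INVOICE_LINE_F F
    where O.ROW_WID = F.CUSTOMER_WID
      and D.ROW_WID = F.INVOICED_ON_DT_WID
      and D.PER_NAME_FSCL_MNTH = '2012 / 12'
    group by O.Customer_num
    minus
    select O.Customer_num, sum(F.X_SALES_DEDUCTION_ALLOC_AMT)
    from Sa.W_ORG_D@STPRD O, Sa.W_DAY_D@STPRD D, Sa.W_SALES_INVOICE_LINE_F@STPRD F
    where O.ROW_WID = F.CUSTOMER_WID
      and D.ROW_WID = F.INVOICED_ON_DT_WID
      and D.PER_NAME_FSCL_MNTH = '2012 / 12'
    group by O.Customer_num;
    -- rows returned exist (or have a different total) in Test but not in Prod;
    -- swap the two halves to check the other direction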

    If you can create a temporary database link between the two environments use DBMS_RECTIFIER_DIFF or DBMS_COMPARISON to compare the two tables' contents.
    http://www.morganslibrary.org/reference/pkgs/dbms_comparison.html
    http://www.morganslibrary.org/reference/pkgs/dbms_rectifier_diff.html
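    A minimal PL/SQL sketch of the DBMS_COMPARISON route, assuming the STPRD database link can be reused and that the compared table has a primary key or unique index on the compared columns (the comparison name and local schema below are placeholders, not real names):
    -- define a comparison between the local and the remote copy of the fact table
    begin
      dbms_comparison.create_comparison(
        comparison_name    => 'CMP_SALES_INV_LINE',        -- placeholder name
        schema_name        => 'LOCAL_SCHEMA',              -- placeholder: owner of ESA_W_SALES_INVOICE_LINE_F
        object_name        => 'ESA_W_SALES_INVOICE_LINE_F',
        dblink_name        => 'STPRD',
        remote_schema_name => 'SA',
        remote_object_name => 'W_SALES_INVOICE_LINE_F');
    end;
    /
    -- run the comparison and report whether the two tables are in sync
    declare
      l_scan  dbms_comparison.comparison_type;
      l_equal boolean;
    begin
      l_equal := dbms_comparison.compare(
                   comparison_name => 'CMP_SALES_INV_LINE',
                   scan_info       => l_scan,
                   perform_row_dif => true);
      dbms_output.put_line(case when l_equal then 'Tables are in sync'
                                else 'Differences found, scan id ' || l_scan.scan_id end);
    end;
    /
    DBMS_RECTIFIER_DIFF works along the same lines but can also rectify the differences it finds; see the links above for the full parameter lists.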

  • Using Test Setting file to run web performance tests in different environments

    Hello,
    I have a set of web performance tests that I want to be able to run in different environments.
    Currently I have a csv file containing the URL of the load balancer of the particular environment I want to run the load test (containing the web performance tests) in, and to run it in a different environment I just edit this csv file.
    Is it possible to use the test settings file to point the web performance tests at a particular environment?
    I am using VSTS 2012 Ultimate.
    Thanks

    Instead of using the testsettings I suggest using the "Parameterize web servers" command (found via context menu on the web test, or via one of the icons). The left hand column then suggests context parameter names for the parameterised web server URLs. It should be possible to use data source entries instead. You may need to wrap the data source accesses in doubled curly braces if editing via the "Parameterize web servers" window.
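    For illustration only (the data source, table and column names here are made up), a parameterised web server context parameter bound to a data source would then hold a value such as {{EnvData.Servers.WebServerUrl}} instead of a hard-coded URL, so switching environments becomes a matter of pointing the run at a different data source row.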
    Regards
    Adrian

  • Interface and conversion testing of SAP environments with Master Data

    Hi guys,
    Please let me know if any of you know about SAP conversion projects. Below you have more description:
    - testing of Interfaces from Legacy Systems
    - testing of conversion programs used in the conversion or transposition of data from legacy systems
    - data cleansing activities associated with conversion
    - identify and populate various SAP environments with Master Data necessary for both conversion and interface testing
    Any detailed info on that, and on what kind of knowledge you need for such a project, would be useful.
    Thanks in advance
    Adeel

    Hi Yannick,
    I am trying to do the exact same thing. Have you gotten any further on this issue?

  • Testing Multiple Desktop Environments + Increasing Productivity

    Hello,
    I've been using Arch Linux with Gnome for some time now and I'm interested in testing out other desktop environments, particularly Xfce and FluxBox. I'm not really interested in KDE as from past experience I've found it rather bloated. I do however like how you can run Quanta Plus on it (although I've heard this can be done via Gnome with the KDE files etc).
    I recently used Ubuntu with Gnome and was impressed with its ability to have completely separate work spaces (separate icons, wallpapers, taskbar and processes). I'm interested in methods of doing this and saving/restoring entire workspace sessions. The idea is to have two completely separate work spaces, one for personal use and the other for work.
    Is it possible to have multiple DE's installed at the same time on Arch and ability to switch between them easily and also remove them if necessary?
    Basically I'm interested in increasing productivity.
    I'm interested in your thoughts on this.
    Thanks.

    Well, if you are developing applications that need testing on multiple desktop environments, then yes, installing them would help increase your productivity.
    But if your developed apps are platform independent, installing multiple DEs - in one way - is nothing but bloat and will get in the way, because now you will have to remember different shortcuts/application names for different DEs. That in my opinion leads to decreased productivity. Separate wallpapers on different virtual desktops and apps like those take up a lot of memory and are a waste of resources, IMHO.
    If, however, you want to just try out different WMs/DEs to find the right one for you, then yes... as moljac mentioned, install them and find out which ones you like.
    And yes, it's easy to install and remove them.

  • Export Import to Maintain Test and Production Environments

    We have developed an application using Locally built Database Providers in Oracle Portal 9.0.2.6 which is installed to 2 schemas, has 5 different providers, and over 100 Portal major components (Forms, Reports, and Calendars) and over 200 minor components (LOV's and Links). We have used export/import transport sets with some luck, but it is a struggle because the import procedures are not very robust. Many things (such as missing LOV's, corrupt components, preexisting versions, etc.) can cause an import to fail. And the cleanup necessary to finally achieve a successful import can be very time-consuming.
    Having a robust import mechanism is very important to our strategy for keeping installed (our own and clients') portal instances up-to-date with our latest release. Some of the enhancements that would make it much easier to develop and maintain Portal applications include:
    Within the Portal:
    1. Ability to copy an entire provider within the same portal (rather than one component at a time).
    2. Ability to change the schema to which a Provider is associated.
    3. When copying a component from one provider to another, the dependent items (i.e. LOVs and Links) should be copied to new second provider as well. (i.e. rather rebuilding each LOV in each provider and then editing each form to point to the new LOVs)
    Transport Sets:
    4. Should allow for changing provider names and provider schema, and global component name changes, and resetting unique IDs on import (to create a copy rather than overwrite).
    5. Should allow the option to ignore errors and import all components which pass pre-check (rather than failing all components if all items do not pass pre-check).
    How are other Portal Developers dealing with installing and then rolling out new sets of Locally built Database Providers from Development environments to Production? Are there any whitepapers on the best practices for replicating/installing a portal application to a new portal instance and then keeping it updated?
    Oracle, are any of my wish-list items above on the future enhancement lists? Or have others figured out workarounds?
    Thanks,
    Trenton

    There are a couple of references which can be found on Portalstudio.oracle.com that are of some use:
    1. A FAQ for Portal 9.0.2.6 Export/Import http://portalstudio.oracle.com/pls/ops/docs/FOLDER/COMMUNITY/OTN_CONTENT/MAINPAGE/DEPLOY_PERFORM/9026_EXPORT_IMPORT_FAQ_0308.HTM
    2. Migration Instructions by Larry Boussard (BRUSARDL)
    3. Migrating Oracle Portal from Dev Systems to Production Systems by Dheeraj Kataria.
    These are all useful documents for a successful first-time Export-Import. However, the limitations and lack of robustness I listed in my first post make the process so time-consuming and error-fraught as to not be a practical development strategy.

  • Increase Performance and ROI for SQL Server Environments

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    These tools give you greater flexibility for managing and protecting important business applications.
    Chris Lemmons
    Director, EIS Technical Marketing, NetApp
    If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
    In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
    Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
    Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
    Source: NetApp, 2015
    Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
    End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
    Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
    If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
    Test Methodology
    To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
    Table 1) Components used in testing.
    SQL Server 2014 servers: Fujitsu RX300
    Server operating system: Microsoft Windows 2012 R2 Standard Edition
    SQL Server database version: Microsoft SQL Server 2014 Enterprise Edition
    Processors per server: 2 6-core Xeon E5-2630 at 2.30 GHz
    Fibre channel network: 8Gb FC with multipathing
    Storage controller: AFF8080 EX
    Data ONTAP version: Clustered Data ONTAP® 8.3.1
    Drive number and type: 48 SSD
    Source: NetApp, 2015
    The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
    The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
    That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
    Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
    Source: NetApp, 2015
    In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
    The All Flash FAS system still had additional headroom under this load.
    Calculating the Savings
    Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
    The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
    Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
    ROI: 65%
    Net present value (NPV): $950,000
    Payback period: six months
    Total cost reduction: more than $1 million saved over a 3-year analysis period compared to the legacy storage system
    Savings on power, space, and administration: $40,000
    Additional savings due to nondisruptive operations benefits (not included in ROI): $90,000
    Source: NetApp, 2015
    The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
    Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
    Maximum SQL Server 2014 Performance
    In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
    A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
    Data Reduction and Storage Efficiency
    In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
    For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
    Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
    First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
    A Better Way to Run Enterprise Applications
    The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
    If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
    And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
    For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
    Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.

  • Test Agents issues

    Hello, 
    I am working on setting up an environment where we will be running Coded UI tests. I’m planning on triggering the tests from TFS builds.
    I have a test plan with 1 automated test case that doesn’t do much – it’s meant to always succeed
    5 VMware servers with Test Agents – each is used in Desktop Client role – automated test launches browser window and types in user name on a Login page
    If I RDP onto test machine, my Test Run (triggered by TFS build) is executed and succeeds
    I am able to have the Test Run execute and succeed without me logging onto the test machine with the test user – so I know that, at least at some point –
    Controller and agent accounts are in correct security groups
    Firewalls exceptions got created during the configurations of controller & agent
    Screen savers are disabled
    Agent runs as process and is able to interact with the desktop
    Without any changes to the Coded UI test or the environments themselves (as far as I can tell), my TFS test build randomly fails. I’ve seen a number of different errors:
    The unit test adapter failed to connect to the data source or to read the data
    Error calling Initialization method for test class PortalCodedUI.Tests.PageLogInCodedUITest: Automation engine is unable to playback the test because it is not able to interact with the desktop. This could happen if the computer running the test is locked or its remote session window is minimized.
    Test method PortalCodedUI.Tests.PageLogInCodedUITest.CodedUITestMethod1 threw exception… Failed to find any control that matched the search condition…
    NOTE: screenshot shows minimized browser window
    Failed to queue tests for test run […] on agent […]: No connection could be made because the target machine actively refused it
    An error occurred while communicating with Agent
    Unable to create instance of class PortalCodedUI.Tests.PageLogInCodedUITest. Error: System.ComponentModel.Win32Exception: Access is denied.
    Another error that shows up in the Test Controller’s log is: Unable to delete temporary files on the following agent(s): vstfs:///LabManagement/TestMachine/1 – please note that the account the controller is running under is an admin on the Test Agent box.
    Usually, restarting the machine and repairing the environment makes the test run and succeed again.
    Given the fact that the errors I’m seeing are all over the place, they usually don’t occur twice in a row – I get a different one with every run – and that no obvious changes are made to the code or the environments themselves, I’m finding it very challenging to troubleshoot any of them. I also suspect there may be another reason that causes all of my issues.
    Any suggestions would be greatly appreciated.
    Thank you

    Starain – thank you for your response.
    I just want to reiterate that I have one test case with very simple code and no changes are being made to it. I’m also not re-configuring the environments. After a reboot of a test machine, my runs succeed, and then at some point they start failing with one of the listed errors – once that happens, a reboot is needed for another successful build. To answer your questions/comments:
    I am using the build template – it’s pulling code from the TFS drop location
    The test user can connect and execute Coded UI tests. It works a couple of times and then just stops and starts throwing errors. There are no screen savers, auto logon is enabled.
    Test method PortalCodedUI.Tests.PageLogInCodedUITest.CodedUITestMethod1 threw exception… Failed to find any control that matched the search condition
    This error is thrown when the very same (and only) test is executed as before. My build runs a couple of times in a row and the test succeeds. At some point, the test just starts failing with this error. There is a screenshot attached to the test result – it shows that IE was launched but it’s minimized.
    I have enabled logs and I didn’t see anything in there that would point to reasons why these errors just start showing up after a few successful runs.

  • FNDLOAD Question -- How to FNDLOAD a set of custom ROLES, not entire UMX?

    Calling all FNDLOADers,
    Question:
    Are there any other options in FNDLOAD that will allow us to pull only those roles that are custom, either based off of UMX_ROLE name, UMX security CODE or any other attribute?
    Background:
    We have created ~40 custom roles in our 11.5.10.2 EBIZ environment, and would like to migrate these roles with FNDLOAD between environments. We are executing the following:
    FNDLOAD <username/pwd@sid> 0 Y DOWNLOAD $FND_TOP/patch/115/import/afrole.lct umx_roles.ldt WF_ROLE ORIG_SYSTEM=UMX%
    This will get us ALL UMX roles in the entire EBIZ system!
    Now, granted, we can go into the umx_roles.ldt file and edit the file, but the possibility for human error comes in..."FAT FINGERS" syndrome!
    Thanks and best regards,
    Gabe D

    Hello Gabe,
    I did a lot with FNDLOAD, iSetup and ACMP over the previous months. I am assuming that you are using naming conventions for your roles, right?
    In that case, the role code should always start with XX, to specify that the role is a custom role.
    Then, in FNDLOAD you use the parameter ROLE_NAME:
    FNDLOAD <username/pwd@sid> 0 Y DOWNLOAD $FND_TOP/patch/115/import/afrole.lct umx_roles.ldt WF_ROLE ROLE_NAME=UMX|XX%
    I just tested this and it works perfectly, exactly the 15 roles I created are extracted.
    Any follow up question is welcome!
    kr, Volker

  • Role of DBA in DW life cycle

    Hi friends,
    Would you please tell me what the basic jobs are that a DBA has to do across a complete data warehouse life cycle?
    Thanks in advance,
    Pragati.

    I would refer to the various books by Ralph Kimball for more information on this as he covers all the various roles within a data warehouse project. The Oracle Database 2 Day DBA 10g Release 2 (10.2) provides a comprehensive overview of DBA type tasks, most of which apply to any type of application (OLTP or data warehousing).
    In addition DBAs might also be asked to manage design repositories such as those required by Oracle Warehouse Builder and other ETL tools and ensure these are configured and backed up correctly.
    Much will depend on the size of the data warehouse team, the size of the project and the required roles and responsibilities. At some customers where I have worked there have been different DBAs covering the development, testing, QA, and production environments. At other customers I have seen DBAs cover just about everything including writing deployment scripts. So there is no standard approach in my opinion.
    Hope this helps,
    Keith
    Product Management
    Oracle Warehouse Builder

  • Testing general help

    Hi Gurus,
    I am soon going to start testing on an SAP upgrade project. What's the best way to get myself prepared for it? I am currently studying the client's business processes, however there are just too many docs and these are getting me confused. I am also concerned that all the time spent studying the docs may end up wasted. If you have been in a similar situation before, please let me know how you go about it. Testing will be functional + integration.

    Hi Dave,
    Please see the material below; I think it will give you a solution.
    SAP R/3
    Security Upgrades
    1.             Overview
    The purpose of this document is to provide additional information that could be helpful with SAP Security upgrades, especially pertaining to 4.6C.
    This document is not aimed at replacing the SAP Authorizations Made Easy guidebook’s procedures, but rather to complement these based on lessons learnt from previous upgrade projects. 
    It is focused mainly on upgrades from 3.1x to 4.6x and covers the following:
    ·        Evaluation of the Security Upgrade approaches;
    ·        “Gotchas” to watch out for with SAP’s SU25 utility;
    ·        Transactions and authorizations that require special attention; and
    ·        Helpful reports, transactions, hints and tables to know.
    It is highly recommended that you review the chapter on upgrades in the Authorizations Made Easy guide before attempting the security upgrade.
    See OSS note 39267 for information on obtaining the Guide, or visit SAPLabs’ website at: http://wwwtech.saplabs.com/guidebooks/
    2.             Security upgrade objectives, process and approaches
    2.1.               Objectives
    There are a couple of objectives for having to upgrade the SAP Security infrastructure:
    ·   Converting manual profiles created via SU02 to activity groups, as SAP recommends the use of Profile Generator (PFCG) for the maintenance of profiles;
    ·   Adding new transactions representing additional functionality to the applicable activity groups;
    ·   Adding the replacement transactions that aim at substituting obsolete or old-version transactions, including the new Enjoy transactions;
    ·   Adjusting the new authorization objects that SAP added for the new release; and
    ·   Ensuring that all existing reports, transactions and authorizations still function as expected in the new release of SAP.
    2.2.               Overview of the Security upgrade process
    Once the Development system has been upgraded to 4.6, the security team will need to perform the following steps as part of the Security Upgrade:
    ·        Convert Report Trees to Area Menus;
    ·        Review users (via SU01) to check for any new or changed fields on the user masters;
    ·        Convert manual profiles created via SU02 to Activity Groups (See Approaches below);
    ·        Compare SU24 customer settings  to new SAP default settings (SU25 steps 2A-2C);
    ·        Determine which new / replacement transactions have to be added to which activity groups (SU25 step 2D);
    ·        Transport the newly-filled tables USOBT_C and USOBX_C that contain the SU24 settings you’ve made (SU25 step 3); and
    ·        Remove user assignments to the manual profiles.
    2.3.               Approaches to convert manual profiles to Activity Groups:
    2.3.1.      Approach #1: SAP’s standard utility SU25
    SAP provides a utility for converting Manual Profiles to Activity Groups and for identifying the new and replacement transactions that need to be added to each activity group.
    You can access this utility by typing “SU25” in the command box.
    If you do decide to use SU25 Step 6 to convert the Manual profiles to activity groups, you will need to watch out for the following “gotchas”:
    Naming convention (T_500yyyyy_previous name)
    All activity groups created before SU25 is run are renamed to T_500yyyyy_previous name. 
    See OSS note 156196 for additional information and procedures to rename the activity groups back to their original names using program ZPRGN_COPY_T_RY_ARGS.  Carefully review information regarding the loss of links between profiles and user master records.
    Transaction Ranges
    Ranges of transactions are not always added correctly to the newly-created activity groups. Some of the transactions in the middle of the range are occasionally left off.  E.g. you have a transaction range of VA01 – VA04 for a specific manual profile.  After SU25 conversion, the new Activity Group only contains VA01 and VA04.  Transactions VA02 and VA03 were not added.
    It is important that a complete download of table UST12 is done prior to running SU25.  Once SU25 has been run, a new download of UST12 can be done to identify which transactions have been dropped off.
    The missing transaction codes will need to be added manually to the relevant activity group via PFCG.
    Missed “new” transactions
    The output of one of the steps in SU25 is a list of the new replacement transactions (e.g. Enjoy transactions) that need to be added per activity group.  E.g. transaction ME21N replaces ME21.  The list will identify each activity group that has ME21 where ME21N needs to be added to.
    In some cases SU25 does not identify all new transactions to be added.
    2.3.2.      Approach #2: Manual reconstruction of Profiles as Roles (Activity Groups)
    An alternative approach to SU25 is to manually create an activity group for each manual profile that was created via SU02.
    The advantage of this approach is that you won’t have any missing transactions that were “dropped off” with the SU25 conversion.  
    3.      Items requiring special attention
    3.1.   Authorizations
    Several new authorization objects have been added with release 4.6. Care should be taken when adjusting authorizations – carefully review all new defaults that were brought in. These are indicated by a Yellow or Red traffic light in PFCG.
    It is highly recommended that you first check the previous settings where new defaults were brought in, before just accepting the new defaults.  You can either use the existing 3.1x Production system or the UST12 and/or USOBT_C tables as reference.
    3.2.   ‘*’ in S_TCODE
    It’s recommended that all activity groups containing an ‘*’ in authorization object s_tcode are recreated via PFCG by selecting only those transactions required for that role.  Also, if you did previously add transactions to an activity group by manipulating the s_tcode authorization entries, it is recommended that the transactions are pertinently selected/added on the Menu tab. The object s_tcode should be returned to its ‘Standard’ status.
    3.3.   Report Trees
    Report Trees need to be converted to Area Menus using transaction RTTREE_MIGRATION.
    3.4.   ABAP Query reports
    Reports created by ABAP Query need to be added either to the activity group (Menu tab) or to an Area menu to ensure an authorization check on s_tcode level.
    3.5.   S_RFC
    The use of an authorization object for Remote Function Calls (RFC) was introduced to provide authorization checks for BAPI calls, etc. Authorization object s_rfc provides access based on the Function Group (each RFC belongs to a Function Group). Due to the potential prevalent use of RFC’s within the R/3 system, SAP has provided the ability to change the checks for this object via parameter auth/rfc_authority_check. It is possible to deactivate the checking of this object completely. However it is recommended to rather set the values as required, which makes testing even more important! 
    3.6.   Custom tables and views
    Custom views and tables that are customarily maintained via SM30, SM31,etc. will need to be added to an authorization group.  This can be done via transaction SE54 or SUCU or by maintaining table TDDAT via SM31.
    3.7.   User menus versus SAP menu
    A decision needs to be made, once the first system has been upgraded to 4.6x, as to whether the user menus, the SAP menu, or both are to be used.
    Most users find the new user menus confusing and unfamiliar due to duplication of transactions, etc. (if a user has more than one activity group and the same transaction appears in several, the transaction will appear multiple times). The majority of upgrades from my experience have opted to use a modified copy of the SAP menu by adding their own area menus (converted report trees).
    3.8.   Re-linking of user master records to profiles
    If you do not maintain the user masters in the same client as the activity groups, you will need to establish a strategy for re-linking the users in the QA and Productive environments when transporting the activity groups as part of the upgrade cutover. This might also be necessary depending on whether you decided to rename the Activity groups per OSS note 156196.
    Remember to thoroughly test and document all procedures and CATT scripts prior to the Production cutover.
    3.9.   Dual-maintenance
    With most current upgrades, the upgrade process will be tested on a separate environment set aside from the existing landscape. In a lot of cases a dual-landscape will be implemented where the existing landscape is complemented with an additional 4.6x test client(s).   The new 4.6x clients usually become part of the permanent landscape once the Production system has been cut over and all changes are then sourced from these ‘new’ Development and/or QA systems.
    It is imperative that all interim security-related changes are applied to both sets of systems to ensure that the ‘new’ 4.6x development source system is current with all changes that were made as part of Production support in the ‘old’ version landscape.  If not, you will have changes that were taken to Production when it was still on the older release, but are now missing after the switch is made to the 4.6x systems.
    It is thus advisable to keep changes during the upgrade project to a minimum.
    3.10. Transport of activity groups
    Changes to activity groups are not automatically recorded in 4.6x. When an activity group needs to be transported, it needs to be explicitly assigned to a change request via PFCG.
    SAP recommends that you first complete all the changes to an activity group, before you assign it to a transport request.   Once you’ve assigned the activity group to a request, do not make any further changes to it.
    You can also do a mass transport of activity groups via PFCG > Environment > Mass Transport.
    If you want to transport the deletion of an activity group, you first have to assign the activity group to a transport request before performing the deletion via PFCG.
    3.11. Client copies
    The profiles used for creating client copies have been changed, especially profile SAP_USER from 4.5 onwards. Activity groups are seen as customizing and the SAP_USER profile copies both user masters and activity groups.
    It’s recommended that the client copy profiles are carefully reviewed before the copy is performed.
    See OSS note 24853 for additional information on client copies.
    3.12. SU24
    Changes to check indicators that were made via SU24 might have to be redone as part of the upgrade.  Ensure that any resulting transport requests are noted and included in the detailed cutover plan.
    Check indicator changes done via SU24 will need to be applied for any new and replacement transactions.
    3.13. Composite Activity Groups
    Composite activity groups can be built in release 4.6x using individual activity groups.  A composite activity group does not contain any authorizations, but is merely a collection of individual activity groups.
    3.14. Central User Administration
    Central User Administration (CUA) simplifies user administration, allowing security administrators to maintain users in a single central client only.  The user masters are then distributed to other clients using ALE.  It is recommended that CUA is implemented post-upgrade and once the systems have been stabilized.  Carefully review OSS notes and the impact on the existing landscape, client copy procedures, etc. prior to implementing CUA.  It is recommended that the upgrade is kept as simple as possible – there are going to be plenty of opportunities to test your problem-solving skills without complicating the setup with new utilities!
    See Authorizations Made Easy guide for information on setting up CUA.
    See OSS notes 333441 and 159885 for additional information.
    4.      Additional tips
    4.1.               OSS and Release Notes
    Review all security-related OSS and Release notes related to upgrades and to the release you’ll be upgrading to, prior to the upgrade.  It’s useful to review these before you define your workplan, in case you have to cater for any unforeseen issues or changes.
    4.2.               Workplan
    Given the amount of work and number of steps involved in the security upgrade, it is recommended that a detailed Workplan is defined at the startup of the upgrade project.  Key milestones from the security workplan should be integrated and tracked as part of the overall Upgrade Plan.
    Clear ownership of activities, including conversion of Report Trees, needs to be established. This function is often performed by the Development team.
    4.3.               Standards and Procedures
    Naming conventions and standard procedures should be established before the manual profiles are reconstructed as activity groups.  Each team member should know how the new activity groups should be named to ensure consistency. Other standard practices for the construction of the activity groups should include:
    ·        Transactions are added via the Menu tab and not by manipulating s_tcode.
    ·        Ideally, no end users should have access to SE38, SA38, SE16 or SE17. 
    Remember to keep Internal Audit involved where decisions need to be made regarding the segregation of job functions or changes to current authorizations are requested or brought in with new authorization objects / defaults.
    4.4.               Testing
    4.4.1.      Resources for testing
    Enough resources should be allocated to the security upgrade process as each activity group and profile will require work to some degree or the other.  It is important that key users and functional resources are involved in testing the activity groups and that this effort is catered for in the Upgrade Project plan.  Clear ownership of each activity group should be established not only for testing purposes, but also for ongoing support and approval of changes.  Ideally, the ownership and approval of changes should reside with different resources (i.e. the person requesting the addition of a transaction or authorization should not be the same person responsible for approving the request).
    4.4.2.      Test Plan
    The security team should also establish testing objectives (whether each transaction being used in Production should be tested, whether each activity group should be tested with a representative ID, etc.). 
    A detailed test plan should then be established based on the approach, to ensure each person responsible for testing knows what s/he should be testing, what the objective(s) of the test is and how to report the status of each test.  Both positive (user can do his/her job functions) and negative (user can’t perform any unauthorized functions) testing should be performed.
    The Reverse Business Engineering (RBE) tool is very useful in identifying which transactions are actually being used in Production. This can assist with focusing on which transactions to test.
    The importance of testing all used transactions individually and as part of role-testing cannot be stressed enough.  TEST,TEST,TEST!
    Every menu option, button, icon and available functions for all critical transactions need to be checked and tested.  There are some instances where icons are grayed out or don’t even appear for certain users, due to limited authorizations.  The only way these type of issues can be identified, is through thorough testing.
    4.5.               Issue Management (tracking and resolution)
    Due to the number of users potentially impacted by issues / changes to a single activity group, a perception can quickly be created that the security upgrade was unsuccessful or the cause of many post GoLive issues.
    It is therefore recommended that an issues log is established to track and ensure resolution of issues.  The log should ideally also contain a description of the resolution, to aid with similar problems on other activity groups. 
    This log will be helpful during the entire upgrade process, especially where more than one resource is working on the same set of activity groups, so set it up at the beginning of the upgrade project! You can also use this as a ‘lessons learnt’ document for the next upgrade.
    4.6.               Status reporting
    The security upgrade forms an integral part of the overall upgrade given the sensitivity and frustration security issues could cause.  It is important that key milestones for the security upgrade are tracked and reported on to ensure a smooth and on-time cutover.
    4.7.               Detailed cutover plan
    The detailed cutover plan differs from the overall security workplan, in that the detailed plan outlines the exact steps to be taken during each system’s upgrade itself.  This should include:
    ·        Transport request numbers,
    ·        Download of security tables prior to the upgrade, especially UST12, USOBT_C and USOBX_C (a baseline-comparison sketch follows this list),
    ·        A backup and restore plan, (e.g. temporary group of activity groups for critical functions),
    ·        The relinking of user master records, with details on any CATT scripts, etc. that might be used,
    ·        User comparison, etc. 
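    The point of downloading those tables beforehand is to keep a baseline you can diff against the post-upgrade state (for example, to spot transaction-to-authorization-object proposals that changed in USOBT_C after the upgrade steps).  A minimal sketch of such a comparison, assuming the tables were saved from SE16 as tab-delimited files; the file names and key columns below are illustrative assumptions, not fixed names:
    import csv

    def load_extract(path, key_fields):
        # Read a tab-delimited SE16 extract and index its rows by the given key columns.
        with open(path, newline="", encoding="utf-8") as f:
            return {tuple(row[k] for k in key_fields): row
                    for row in csv.DictReader(f, delimiter="\t")}

    KEY = ("NAME", "OBJECT", "FIELD")   # assumed key columns of the USOBT_C extract

    before = load_extract("usobt_c_before.txt", KEY)
    after = load_extract("usobt_c_after.txt", KEY)

    # Proposals that exist on only one side of the upgrade, and proposals whose values changed.
    print("New proposals:", sorted(set(after) - set(before)))
    print("Dropped proposals:", sorted(set(before) - set(after)))
    for key in sorted(set(before) & set(after)):
        if before[key] != after[key]:
            print("Changed proposal:", key)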
    The security team needs to ensure that enough time is allocated for each action item and that this time is built into the overall cutover plan.   The project manager is usually expected to give an indication to end users and key stakeholders as to when the Productive system will be unavailable during its cutover to the new release.  This downtime should thus incorporate the time required to perform user master comparisons, unlocking of IDs and all other action items.
    4.8.               Project team access
    The SAP_NEW profile can temporarily be assigned to project team members to provide interim access to the new authorization objects. This provides the security team the opportunity to convert and adjust the IS team’s activity groups.  It also eliminates frustration on the functional team’s side when configuring and testing new transactions, etc.
    4.9.               Training and new functionality
    Some support team members (e.g. Help Desk members responsible for reset of user passwords, etc.) might require training and/or documentation on the changed screens of SU01, etc.
    It is recommended that a basic Navigation & Settings training module be created for all SAP users, covering the use of Favorites, etc.
    The security team should also review Profile Generator in detail, as several new functions have been added (e.g. download/upload of activity groups, etc.).  Remember to review all the different icons, menu options and settings on the authorizations tab, etc.
    Lastly, if your company / project does use HR in relation to security (activity groups and users assigned to positions / jobs), ensure that you become acquainted with the new Enjoy transactions, e.g. PPOMW.
    4.10.           SU53
    A new function with SU53 is the ability to display another user’s SU53 results.   (Click on the ‘other user’ button and enter the person’s SAP ID).
    4.11.           Post Go-live
    Remember to establish a support roster, including after hours for critical batch processes, to ensure security-related issues are resolved in a timely fashion.
    Dumps should be checked regularly for any authorization-related issues (objects S_RFC and S_C_FUNCT are particularly likely to make appearances in dumps).  Transaction ST22 can be used to review dumps for the current and the previous day.
    Avoid transporting activity groups at peak times, as the generation of activity groups can cause a momentary loss of authorizations.  It’s recommended that a roster for activity group transports and mass user comparisons be reviewed with the project manager prior to the upgrade.  Exceptions should be handled on an individual basis and the potential impact identified, based on the number and type of users, batch jobs in progress, etc. 
    And, don’t forget to keep on tracking all issues and documenting the resolutions for future reference.
    5.      Helpful reports, transactions and tables
    5.1.               Reports and Programs
    ·           RTTREE_MIGRATION: Conversion of Report Trees to Area Menus
    ·           PFCG_TIME_DEPENDENCY: user master comparison (background)
    ·           RSUSR* reports (use SE38 and do a possible-values list for RSUSR* to see all available security reports), including:
    -     RSUSR002 – display users according to complex search criteria
    -     RSUSR010 – Transactions that can be executed by users, with Profile or Authorization
    -     RSUSR070 – Activity groups by complex search criteria
    -     RSUSR100 – Changes made to user masters
    -     RSUSR101 – Changes made to Profiles
    -     RSUSR102 – Changes made to Authorizations
    -     RSUSR200 – Users according to logon date and password change, locked users.
    5.2.               Transactions
    ·           SUIM : various handy reports
    ·           SU10 : Mass user changes
    ·           PFCG: Profile Generator
    ·           PFUD: User master comparison
    ·           SU01: User master maintenance
    ·           ST01: System trace
    ·           ST22: ABAP dumps
    ·           SUCU / SE54: Maintain authorization groups for tables / views
    ·           PPOMW: Enjoy transaction to maintain the HR organizational plan
    ·           PO10: Expert maintenance of Organizational Units and related relationships
    ·           PO13: Expert maintenance of Positions and related relationships
    ·           STAT: System statistics, including which tcodes are being used by which users
    5.3.               Tables
    ·           UST12: Authorizations and Tcodes per Profile
    ·           UST04: Assignment of users to Profiles
    ·           AGR_USERS: Assignment of roles to users
    ·           USOBT_C: Authorizations associated with a transaction
    ·           USR02: Last logon date, locked IDs
    ·           AGR_TCODES: Assignment of roles to Tcodes (4.6 tcodes)
    ·           USH02: Change history for users (e.g. who last changed users via SU01)
    ·           USH04: Display history of who made changes to which user IDs
    ·           USR40: Non-permitted passwords
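    As a quick illustration of how two of these tables fit together: AGR_USERS (roles assigned to users) joined to AGR_TCODES (tcodes contained in roles) on the role name answers the question of who can reach a given transaction through a role.  A minimal offline sketch over tab-delimited SE16 extracts; the file names and the column names AGR_NAME, UNAME and TCODE are assumptions for illustration:
    import csv
    from collections import defaultdict

    def read_extract(path):
        # Read a tab-delimited SE16 extract into a list of row dictionaries.
        with open(path, newline="", encoding="utf-8") as f:
            return list(csv.DictReader(f, delimiter="\t"))

    # AGR_USERS: assignment of roles to users.
    users_by_role = defaultdict(set)
    for row in read_extract("agr_users.txt"):
        users_by_role[row["AGR_NAME"]].add(row["UNAME"])

    # AGR_TCODES: assignment of roles to tcodes.
    roles_by_tcode = defaultdict(set)
    for row in read_extract("agr_tcodes.txt"):
        roles_by_tcode[row["TCODE"]].add(row["AGR_NAME"])

    # Which users can reach a given transaction through a role?
    tcode = "SE38"
    users = {u for role in roles_by_tcode.get(tcode, set())
               for u in users_by_role.get(role, set())}
    print(tcode, "is reachable via roles for:", sorted(users))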
    I am also providing the URL of an SAP upgrade guide; please check it out:
    www.thespot4sap.com/upgrade_guide_v2.pdf
    Reward me points if it helps you.
    Thanks,
    Karthik

  • lsof -i output showing webcache on prod EBS iSupplier (but not in dev/test)

    Hi,
    I inherited a Linux EBS install (12.0.6) with iSupplier on an external web server. The web server is front-ended by a reverse proxy. Using lsof -i to look at the ports, I noticed a difference between my Test and Production environments.
    In production the lsof output contains a number of the following entries:
    httpd     7062     applp25     18u     IPv4     521062911     TCP     *:webcache     (LISTEN)
    httpd     7062     applp25     32u     IPv4     546736321     TCP     pwk-sv-wb10.foo.bar:webcache->pwk-ap-ap3.foo.bar:11940     (ESTABLISHED)
    applp25 is the applmgr account and pwk-ap-ap3 is the reverse proxy.
    In my test environment I do not see anything related to webcache, and when I do see communication back to the proxy it is always on the web port,
    e.g.
    httpd 13149 applt25 32u IPv4 547891775 TCP pwk-sv-wbt10.foo.bar:8040->pwk-ap-ap3.foo.bar:11963 (ESTABLISHED)
    I don’t do anything different starting/stopping processes in Prod, Test or Dev e.g. adstrtal.sh.
    While I understand that Oracle has web caching technologies, I don’t have any experience with them and am not quite sure where to begin looking. Obviously I plan on comparing the context files, etc.; I am just hoping for some suggestions on where else to look and references I should consult.
    Thanks in Advance
    ken

    Solved... for some reason /etc/services as distributed in Red Hat (others?) has port 8080 associated with the service name webcache, so lsof was simply displaying port 8080 under that name.
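
    For anyone who hits the same thing: lsof resolves port numbers to service names via the local services database (normally /etc/services) unless you pass -P, so the webcache label is purely cosmetic and the underlying listener is just port 8080.  A small sketch to confirm what a given host maps those ports to (the 8040/8080 values are taken from the outputs above; which names they resolve to depends entirely on the local /etc/services):
    import socket

    # lsof shows service names instead of numbers because it consults the services
    # database (normally /etc/services); 'lsof -i -P' would keep the raw port numbers.
    for port in (8040, 8080):
        try:
            print(port, "->", socket.getservbyport(port, "tcp"))
        except OSError:
            print(port, "-> no service name defined on this host")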

  • Displaying multiple roles related to a single key value in a single view MVC 4

    Dear Team,
             My name is Ajay Sutar. I am new to MVC 4, as we are about to start a new project in ASP.NET MVC 4. I am using Details scaffolding to display a single
    record, but now a single employee can have multiple roles. For example,
     Emp_no    |    Role                     | Salary 
     E1             | Software Engineer  | 10000
     E1             | Tester                     | 10000 
    Since I have used the Details scaffold and the FirstOrDefault method of LINQ for this view, I am unable to display the second role "Tester" in the output; only the first role is getting displayed.
    What i want is :
    Emp_no : E1                 Role: Software Engineer                  Salary:10000
                                                  Tester 

    Hello,
    Welcome to MSDN forum.
    I am afraid that the issue is out of the support range of the VS General Questions forum, which mainly discusses
    the usage of the Visual Studio IDE, such as the WPF & SL designer, the Visual Studio Guidance Automation Toolkit, Developer Documentation and Help System,
    and the Visual Studio Editor.
    Because you are working with an ASP.NET web application, I suggest that you post your issue on the ASP.NET forums:
    http://forums.asp.net/
    for a better solution and support.
    Best regards,
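
    Not a Visual Studio answer, but the shape of the fix is to group the result rows by Emp_no and collect the roles, rather than taking only the first row; in LINQ terms that means a GroupBy over the rows instead of FirstOrDefault, with the view bound to the grouped result.  The grouping idea itself, sketched in Python since the logic is language-agnostic (the record layout is assumed from the example data above):
    from collections import defaultdict

    # One row per (employee, role) pair, as the query returns them.
    rows = [
        {"emp_no": "E1", "role": "Software Engineer", "salary": 10000},
        {"emp_no": "E1", "role": "Tester", "salary": 10000},
    ]

    # Group by employee and collect every role instead of keeping only the first row.
    grouped = defaultdict(lambda: {"roles": [], "salary": None})
    for row in rows:
        grouped[row["emp_no"]]["roles"].append(row["role"])
        grouped[row["emp_no"]]["salary"] = row["salary"]

    for emp_no, info in grouped.items():
        print(f"Emp_no: {emp_no}   Roles: {', '.join(info['roles'])}   Salary: {info['salary']}")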

  • BI authorization objects not appearing in RAR, error while generating role

    Hi
    I am facing certain problems relating to the integration of the BI module (version 7) with GRC Access Controls version 5.3, support package 06. I am describing the problems in detail below:
    (a)  In the Risk Analysis and Remediation (RAR) component, I am creating Functions and Risks for the Business Intelligence (BI) module. For that I have downloaded the descriptive text and authorization object data from the BI development system and uploaded them into RAR. Then I created two Function IDs, DBI1 (having action RSA1) and DBI2 (having actions RSA11, RSA12, RSA13, RSA14, RSA15), and one Risk ID for BI (having Function IDs DBI1 and DBI2) in RAR. But when I checked the permission tabs of the Function IDs DBI1 and DBI2, I could not find any authorization objects for the actions in them.
    (b)  In Enterprise Role Management (ERM), when I try to create a role TEST-BI in DBI 100 and put the BI transaction codes in the authorization data, I get the authorization objects. Risk analysis is also done successfully. But at the time of role generation in background mode, it gives an error message: "Error generating role TEST-BI for system DBI 100: Unable to interpret * as a number." I am thus unable to generate any role in DBI 100.
    (c)  In Compliance User Provisioning (CUP), I have imported a standard role from DBI 100. Then I added a Functional Area, Business Process, Subprocess and Criticality Level to this role in CUP. But when I try to assign this role to a user, it gives the error "Error creating request". Requests are, however, being created and roles are being assigned to users in the ECC development system using the same Initiator, CAD, stage and path.
    Can anyone please help me?


  • Invalid Security role-name error in Web Project

    Hi All,
    I have imported a J2EE application project built in JBOSS into NWDS 7.1.
    While building the project I get the following error:
    CHKJ3020E: Invalid Security role-name error: PEHNTAHO_ADMIN
    This error directs me to the following code in web.xml
    <security-constraint>
              <display-name>Default JSP Security Constraints</display-name>
              <web-resource-collection>
                   <web-resource-name>Portlet Directory</web-resource-name>
                   <url-pattern>/jsp/*</url-pattern>
                   <http-method>GET</http-method>
                   <http-method>POST</http-method>
              </web-resource-collection>
              <auth-constraint>
                    <role-name>PEHNTAHO_ADMIN</role-name>
              </auth-constraint>
              <user-data-constraint>
                   <transport-guarantee>NONE</transport-guarantee>
              </user-data-constraint>
         </security-constraint>
    I have tried out the following things to resolve this issue:
    1) Removed the role manually (as suggested by various people in other J2EE forums), but then some other error came into the picture.
    2) Then I added the following code in web.xml:
    <security-role>
              <role-name>PEHNTAHO_ADMIN</role-name>
         </security-role>
    That resolves the above-mentioned build error, but then I get the following error while deploying the application:
    Dec 3, 2007 12:59:21 AM /userOut/daView_category (eclipse.UserOutLocation) [Thread[Deploy Thread,5,main]] ERROR: Deploy Exception.An error occurred while deploying the deployment item 'sap.com_AnalyticsApp2EAR'.; nested exception is:
         java.rmi.RemoteException:  class com.sap.engine.services.dc.gd.DeliveryException: An error occurred during deployment of sdu id: sap.com_AnalyticsApp2EAR
    sdu file path: D:\usr\sap\CE1\J01\j2ee\cluster\server0\temp\tcbldeploy_controller\archives\191\AnalyticsApp2EAR.ear
    version status: HIGHER
    deployment status: Admitted
    description:
              1. Error:
    Cannot update application sap.com/AnalyticsApp2EAR. Reason: The application sap.com/AnalyticsApp2EAR will not be update, because its validation failed. Reason:
    ERRORS:
    Web Model Builder: com.sap.engine.frame.core.configuration.NameNotFoundException: The parameter/s in String "<?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd">
    <web-app>
         <!-- whole web.xml-->
    </web-app>
    " is/are not defined and could not be substituted., file: AnalyticsApp2.war#WEB-INF/web.xml, column 0, line 0, severity: error
    WARNINGS:
    Web Model Builder: Following tests could not be executed because of failed precondition test "Web Model Builder" : Implicit Constraints Test, JSF Application Test, Mapping Test, Web File Existence Test, Web Class Existence Test, Security Role Test, file: AnalyticsApp2.war, column -1, line -1, severity: warning
    3) I have also added the following code in web-j2ee-engine.xml:
    <security-role-map>
              <role-name>PEHNTAHO_ADMIN</role-name>
              <server-role-name>all</server-role-name>
         </security-role-map>
    but still I get the same deployment error.
    Please help me in resolving this problem.
    Can anybody tell me the use of role "PEHNTAHO_ADMIN"?
    Thanks and Regards,
    Sruti

    Hi Malathy,
    Once the users are created in the Authentication Provider and the roles are created in WebLogic Server, you just have to map users to roles in jazn-data.xml.
    Could you please let us know whether you created a role named "users" in WLS?
    Thanks & Regards,
    Murali.
    ============
