Best practice for batch ID access

I'm not sure if this is the right place to post this, so here goes.
I have a client who wants a super user to create background jobs. These jobs will be assigned to a specific generic ID (e.g. BATCH_FI). The client only wants to give FI-CO access to this ID. They are convinced that this is the way to do it in 2011. I know the other clients I support have a role that is pretty much a copy of SAP_ALL. This client does not want that unless I can prove to them that SAP experts recommend using an SAP_ALL-type role for background users to eliminate the possibility of jobs failing. If it comes from me, they want nothing to do with it. However, if it comes from SAP experts, they'll listen.
Can anyone help me here? What can I do?
Cheers!!!
Randy

Hi,
I created a project (JDeveloper) with local XSD files and tried to delete and recreate them in the structure pane with references to a version on the application server. After reopening the project I deployed it successfully to the BPEL server. The process is working fine, but in the structure pane there is no information about any of the XSDs anymore, and viewing the payload of the variables raises an exception (problem building schema).
How does BPEL know where to look for the XSD files, and how does the mapping still work?
This cannot be the correct way to do it. Do I have a chance to rework an existing project, or do I have to rebuild it from scratch in order to get all the references right?
Thanks for any clue.
Bette

Similar Messages

  • Best practice for how to access a set of wsdl and xsd files

    I've recently been poking around with the Oracle ESB, which requires a bunch of WSDL and XSD files from HOME/bpel/system/xmllib. What is the best practice for including these files in a BPEL project? It seems like a bad idea to copy all these files into every project that uses the ESB, especially if there are quite a few consumers of the bus. Is there a way I can reference this directory from the project so that the files can just stay in a common place for all the projects that use them?
    Bret


  • Best practice for having an access point giving out only a specific range

    Hey All,
    I have an access point which is currently set to relay all DHCP requests to the server DC-01. However, the range that has been set up is running low on available IP addresses, so I have been asked whether it is possible to set up another range for the AP only.
    Is there a way to set up DHCP with a new range so that anything from that access point is given a 192.168.2.x subnet address, as opposed to the standard 192.168.1.x subnet?
    Or would it be easier/better to create a superscope and slowly migrate the users to a new subnet with a larger range?
    Any help/suggestions would be appreciated.
    thanks
    Anthony

    Hi,
    You could configure a DHCP superscope to achieve this.
    For details, please refer to the following articles:
    Configuring a DHCP Superscope
    http://technet.microsoft.com/en-us/library/dd759168.aspx
    Create a superscope to solve the problem of dwindling IP addresses
    http://www.techrepublic.com/article/create-a-superscope-to-solve-the-problem-of-dwindling-ip-addresses/
    Best Regards,
    Andy Qi
    TechNet Community Support

  • Best Practices for Batch Updates, Inserts and Complex Queries

    The approach we have taken for our ALDSP architecture is to model our DASi as business data objects, each DS joining several (sometimes many) tables and lookups. This works OK when we need to access individual records for read and update, but when we need to update multiple tables and rows within the same commit, trying to do this with a single logical DS built on tables or other DASi proves both cumbersome and slow. This is also the case for queries, when we have complex WHERE clauses within a DS built upon two or more multi-table-joined logical DASi.
    We tried a DS built on SQL, but that does not allow DML operations. We may have to just use JDBC. Any thoughts on how best to leverage DAS in this respect?

    I tried doing this by creating a UO class and using it on a DS built on a SQL statement. What we wanted to do here is first read the DS to get a list of ID values that met the conditions of the query, then call submit() and have the UO update all the necessary tables associated with those IDs.
    However, we found that the UO never gets called unless you actually update something, rather than just calling submit() after a read. Did I misunderstand the way this should work?

  • Best Practice for Developer update access to database in Production

    I am curious to find out what other organizations are doing about developer access to SYSADM in production. For example, a database account created like SYSADM that can be checked out for use and locked when not in use? Or something else?

    Developers can be provided with read-only access to the SYSADM schema.
    Thanks
    Soundappan
    Edited by: Soundappan on Dec 26, 2011 12:00 PM

  • Best practices for batch processing without SSIS

    Hi,
    The gist of this question is: in general, how should a C# client insert/update batches of records using stored procedures? The ideas I can think of are:
    1) Create one SP with a parameter of type XML, and pass say 100 records at a time, on one thread.  The SP reads the XML as a table and does a single INSERT.
    2) Create one SP with many parameters that inserts one record.  I can either build a big block of EXEC statements for say 100 records at a time, or call the SP one at a time, on one thread.  Obviously this seems the slowest.
    3) A parallel-processing version of either of the above: pass 100 records at a time via an XML parameter, or a big block of EXEC statements, or one at a time, and use PLINQ to make multiple connections to the database.
    The records will be fairly wide, substantial records.
    Which scenario is likely to be fastest and avoid lock contention? (A sketch of option 1 follows below.)
    (We are doing batch processing and there is no current SSIS infrastructure, so it's manual: fetch data, call web services, update batches.  I need a batch strategy that doesn't involve SSIS - yet.)
    Thanks.
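
    As a point of reference for option 1, here is a minimal sketch of passing one batch as a single XML parameter; the proc name (dbo.usp_insert_batch), the parameter, and the XML layout are hypothetical, and the assumed proc would shred the XML with nodes()/value() into a single INSERT:
    using System.Data;
    using System.Data.SqlClient;
    using System.Xml.Linq;
    class XmlBatchInsert
    {
        // Sends one batch of ids as a single XML parameter to a stored procedure,
        // so the whole batch is one round trip and one INSERT on the server.
        static void InsertBatch(string connectionString, int[] ids)
        {
            var xml = new XElement("rows");
            foreach (int id in ids)
                xml.Add(new XElement("row", new XAttribute("id", id)));
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("dbo.usp_insert_batch", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.Parameters.Add("@rows", SqlDbType.Xml).Value =
                    xml.ToString(SaveOptions.DisableFormatting);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }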

    The "streaming" option you mention in your linked thread sounds interesting. Is that a way to input millions of rows at once? Are they not staged into tempdb?
    The entire TVP is stored in tempdb before the query/proc is executed. The advantage of the streaming method is that it eliminates the need to load the entire TVP into memory on either the client or the server. The rowset is streamed to the server, and SQL Server uses the insert bulk method to store it in tempdb. Below is an example C# console app that streams 10M rows as a TVP.
    using System;
    using System.Collections;
    using System.Collections.Generic;
    using System.Data;
    using System.Data.SqlClient;
    using Microsoft.SqlServer.Server;

    // Assumes matching server-side objects exist (not shown in the thread), e.g.:
    //   CREATE TYPE dbo.tvp_test AS TABLE (col1 int);
    //   CREATE PROC dbo.usp_tvp_test @tvp dbo.tvp_test READONLY AS ...
    namespace ConsoleApplication1
    {
        class Program
        {
            static string connectionString = @"Data Source=.;Initial Catalog=MyDatabase;Integrated Security=SSPI;";
            static void Main(string[] args)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("dbo.usp_tvp_test", connection))
                {
                    // Rows are streamed to the server as the enumerator runs.
                    command.Parameters.Add("@tvp", SqlDbType.Structured).Value = new Class1();
                    command.CommandType = CommandType.StoredProcedure;
                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }
        }
        // Yields 10M rows one at a time; the client never holds them all in memory.
        class Class1 : IEnumerable<SqlDataRecord>
        {
            private SqlMetaData[] metaData = new SqlMetaData[1] { new SqlMetaData("col1", SqlDbType.Int) };
            public IEnumerator<SqlDataRecord> GetEnumerator()
            {
                for (int i = 0; i < 10000000; ++i)
                {
                    var record = new SqlDataRecord(metaData);
                    record.SetInt32(0, i);
                    yield return record;
                }
            }
            IEnumerator IEnumerable.GetEnumerator()
            {
                throw new NotImplementedException();
            }
        }
    }
    Dan Guzman, SQL Server MVP, http://www.dbdelta.com

  • Best practice for Wireless ap vlan

    Is there a best practice for grouping lightweight access points in one VLAN, or can they be spread across several?

    Whether you have multiple sites or not, it's good practice to put your APs in a separate, dedicated VLAN.
    If your sites are routed sites, then you can re-use the same VLAN numbers, but make sure they are on separate subnets and/or VRF instances.

  • Best practice for external but secure access to internal data?

    We need external customers/vendors/partners to access some of our company data (view/add/edit). It's not as easy as segmenting those databases/tables/records out from other existing ones (and putting separate database(s) in the DMZ where our server is). Our current solution is to have a 1433 hole from the web server into our database server. The user credentials are not in any sort of web.config but rather compiled into our DLLs, and that SQL login has read/write access to a very limited number of databases.
    Our security group says this is still not secure, but how else are we to do it? Even with a web service, there still has to be a hole somewhere. Is there any standard best practice for this?
    Thanks.

    Security is mainly about mitigation rather than being 100% secure; "we have unknown unknowns". The component needs to talk to SQL Server. You could continue to use HTTP to talk to SQL Server, perhaps even get SOAP transactions working, but personally I'd have more worries about using such a "less trodden" path, since that is exactly the area where more security problems are discovered. I don't know about your specific design issues, so there might be even more ways to mitigate the risk, but in general you're using a DMZ as a decent way to mitigate risk. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com

  • Best Practices for Accessing the Configuration data Modelled as XML File in

    Hi,
    I have read a couple of blog posts/forum threads on how to model and access configuration data as XML inside OSB.
    One of the easiest ways is described in Re: OSB: What is best practice for reading configuration information
    Another could be uploading the XML data as an .xq file (creating an .xq file and copy-pasting all the configuration as XML).
    I need expert answers to the following.
    1] I have an .xsd file representing the configuration data. The structure of the XSD is:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue</Config>
    </FrameworkConfig>
    2] As my project moves from one environment to another, the property value will change according to the environment.
    For Dev:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue_Dev</Config>
    </FrameworkConfig>
    For Stage:
    <FrameworkConfig>
      <Config type="common" key="someKey">propertyvalue_Stage</Config>
    </FrameworkConfig>
    3] Say I create the following folder structure to store the configuration file specific to the dev/stage/prod instance:
    OSB Project Folder
    |--- Dev
    |      |-- Dev_Config_file.xml
    |--- Stage
    |      |-- Stage_Config_file.xml
    |--- Prod
    |      |-- Prod_Config_file.xml
    4] I need a way to load these property files as an XML element/variable inside the OSB message flow. I can't use the XPath function fn:doc("URL") because I don't know the exact path of the XML on the deployed server.
    5] I also need to look up/model a value specifying the current server type (Dev/Stage/Prod) on which the OSB message flow is running; say, some construct which acts as a global configuration and is accessible inside the OSB message flow. If the value of the global variable is Dev, I will load the XML config file under the Dev directory at runtime, containing the key-value pairs for the Dev environment.
    6] This Re: OSB: What is best practice for reading configuration information
    suggests designing a web application which serves the XML file over HTTP and reading the contents into a variable (which in turn can be used in the OSB message flow). Can we address this problem without creating the extra project and adding the dependencies? I read about the configuration-file approach too, but the sample configuration file doesn't show an entry for an .xml file as a resource.
    Hope I am clear. I really appreciate your comments and suggestions.
    Sushil
    Edited by: Sushil Deshpande on Jan 24, 2011 10:56 AM

    If you can enforce some sort of naming convention for the transport endpoint of this proxy service across the environments, where the environment name is part of the endpoint, you may be able to retrieve it from $inbound in the message pipeline.
    E.g. with http://osb_host/service/prod/service1 ==> prod and http://osb_host/service/stage/service1 ==> stage, I think $inbound/ctx:transport/ctx:uri can give you /service/prod/service1 or /service/stage/service1, and by applying appropriate XPath functions you will be able to extract the environment name.
    Check this link for details on $inbound/ctx:transport: http://download.oracle.com/docs/cd/E13159_01/osb/docs10gr3/userguide/context.html#wp1080822
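
    As an aside, the extraction itself would be an XPath/XQuery expression on $inbound inside OSB; purely to illustrate the parsing step, here is a minimal sketch of the same logic in C# (the endpoint layout is the hypothetical one from the example above):
    using System;
    class EnvironmentFromUri
    {
        // Pulls the environment segment out of a transport URI such as
        // "/service/stage/service1" -> "stage". The segment position is an
        // assumption based on the naming convention suggested above.
        static string ExtractEnvironment(string transportUri)
        {
            string[] segments = transportUri.Trim('/').Split('/');
            return segments.Length >= 2 ? segments[1] : "unknown";
        }
        static void Main()
        {
            Console.WriteLine(ExtractEnvironment("/service/prod/service1"));  // prod
            Console.WriteLine(ExtractEnvironment("/service/stage/service1")); // stage
        }
    }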

  • Best Practices for Data Access

    Good morning!
    I was wondering if someone might give me some advice on best practices for retrieving data from a SQL Server in the cloud via a desktop application.
    I'm curious: if I embed the server address (IP, or domain, or whatever) into my desktop application and allow the users to provide their own usernames and passwords when using the application, is there anything "wrong" with that? In this setup, my application collects the username and password from the user, connects to a server with that username and password, retrieves the data, and uses it in-app.
    I'm petrified of security issues, and I would hate to start using a SQL database with this setup only to find out that anyone could download x, y or z and connect to the database and see everything.
    Assuming I secure all of the users with limited permissions, is there anything wrong with exposing a SQL server to the web for my application to use? If so, what, and what would be a reasonable alternative?
    I really appreciate any help and feedback!
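
    To make the first question concrete, here is a minimal sketch of the collect-credentials-at-runtime pattern the poster describes, assuming each user has their own least-privilege SQL login; the server, database, and table names are hypothetical:
    using System.Data;
    using System.Data.SqlClient;
    class UserCredentialConnection
    {
        // Builds the connection string from credentials the user types in at
        // runtime, so no password is compiled into the binary. Encrypt=true
        // protects the credentials and data on the wire.
        static DataTable FetchOrders(string userName, string password)
        {
            var builder = new SqlConnectionStringBuilder
            {
                DataSource = "myserver.example.com", // hypothetical server
                InitialCatalog = "SalesDb",          // hypothetical database
                UserID = userName,
                Password = password,
                Encrypt = true
            };
            using (var connection = new SqlConnection(builder.ConnectionString))
            using (var adapter = new SqlDataAdapter("SELECT * FROM dbo.Orders", connection))
            {
                var table = new DataTable();
                adapter.Fill(table); // Fill opens and closes the connection itself
                return table;
            }
        }
    }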

    There are two options, neither of them very palatable:
    1) One is to create a domain, and add the VM and your local box to it.
    2) Stick to a workgroup, but have the same user name and password on both machines.
    In practice, a better option is to create a SQL login that is a member of sysadmin - or that has rights to impersonate an account that is a member of sysadmin. And for that matter, you could use the built-in sa account - but rename it to something else.
    The other day I was looking at the error log from a server that apparently had been exposed on the net. The log was full of failed login attempts for sa, with occasional attempts for names like usera and so on. The server is in Sweden - the IP addresses for the login attempts were in China.
    Just so you know what you can expect.
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Best-practice for Catalog Views ? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4,000) and 300 end users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is how I can implement this.
    My first idea is:
    1. Create 110 procurement catalogs (one for every product category). Each catalog should contain only its product category.
    2. Assign, at user level in my org model, all the "catalogs" that the user should access.
    Do you have any ideas on how to improve this?
    Saludos desde Mexico,
    Diego

    Hi,
    Your way of doing it will work, but you'll get maintenance issues (too many catalogs, and a catalog link to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you'll have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
    My advice:
    -Challenge your customer about views, and try to limit the number of views, with for example strategic and non-strategic
    -With CCM 1.0 stick to the procurement catalogs, or implement BAdIs to assign items to the views (I have experienced it; it works, but it is quite difficult), but with a limited number of views
    Good luck.
    Vadim

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same Oracle Application Server.
    Defining an agent for each?
    A gateway for all?
    BPEL security extensions?
    The top-level service that is defined as the business process is itself secured through OWSM and usernames and passwords, but what is the best practice for establishing security for each of the low-level services?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not; if it is performing any business-critical operation then it should be secured.
    The idea of SOA / designing services is to have the services available so that they can be orchestrated as part of any other business process.
    Today you may have secured your parent services, and tomorrow you could come up with a new service which may use one of the existing lower-level services.
    If all the services are in one application server, you can make the configuration/development environment a lot easier by securing them using the gateway.
    A typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the app server only from the gateway.
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    1) The next BPEL developer on your project may not be aware of security extensions.
    2) Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram

  • Best practices for ARM - please help!!!

    Hi all,
    Can you please help with any pointers/links to documents describing best practices for who should create the GRC request in the below ARM workflow in GRC 10.0?
    Create GRC request -> role approver -> risk manager -> security team
    The options are: end user / manager / functional super users / security team.
    End users and managers are not possible; we cannot train so many people. The functional team is refusing since it's a lot of work. Please help me with pointers to any best-practices documents.
    Thanks!!!!

    In this case, I recommend proposing that the department managers create GRC Access Requests.  In order for the managers to comprehend the new process, you should create a separate "Role Catalog" that describes what abilities each role enables.  This Role Catalog needs to be taught to the department Managers, and they need to fully understand what tcodes and abilities are inside of each role.  From your workflow design, it looks like Role Owners should be brought into these workshops.
    You might consider a Role Catalog that the manager could filter on and make selections from.  For example, an AP manager could select "Accounts Payable" roles, and then choose from a smaller list of AP-related roles.  You could map business functions or tasks to specific technical roles.  The design flaw here, of course, is the way your technical roles have been designed.
    The point being, GRC AC 10 is not business-user friendly, so using an intuitive "Role Catalog" really helps the managers understand which technical roles they should be selecting in GRC ARs.  They can use this catalog to spit out a list of technical role names that they can then search for within the GRC Access Request.
    At all costs, avoid having end-users create ARs.  They usually select the wrong access, and the process then becomes very long and drawn out because the role owners or security stages need to mix and match the access after the fact.  You should choose a Requestor who has the highest chance of requesting the correct access.  This is usually the user's Manager, but you need to propose this solution in a way that won't scare off the manager - at the end of the day, they do NOT want to take on more work.
    If you are using SAP HR, then you can attempt HR Triggers for New User Access Requests, which automatically fill out and submit the GRC AR upon a specific HR action (New Hire, or Termination).  I do not recommend going down this path, however.  It is very confusing, time consuming, and difficult to integrate properly.
    Good luck!
    -Ken

  • Best Practices For Household IOS's/Apple IDs

    Greetings:
    I've been searching support for best practices for sharing primarily apps, music and video among multiple iOS devices/Apple IDs.  If there is a specific article please point me to it.
    Here is my situation:
    We currently have 3 iPads (2 kids', 1 dad's) in the household and one iTunes account on a Windows computer.  I previously had all iPads on a single Apple ID/credit card and controlled the kids' downloads through the Apple ID password, which I kept secret.  As the kids have grown older, I found myself constantly entering my password as their interest in music/apps/video increased.  I liked this approach because all content was shared; I disliked it because I was constantly asked to enter my password for all downloads.
    So I recently set up individual accounts for them with the allowance feature at iTunes, which lets them download content on their own (I set restrictions on their iPads).  Now I have 3 Apple IDs under one household.
    My questions:
    With the 3 Apple IDs, what is the best way to share apps, music and videos among myself and the kids?  Is it multiple accounts on the computer and some sort of sharing?
    Thanks in advance...

    Hi Bonesaw1962,
    We've had our staff and students run iOS updates OTA via Settings -> Software Update. In the past, we put a DNS block on Apple's update servers to prevent users from updating iOS (like last fall when iOS 7 was first released). By blocking mesu.apple.com, the iPads weren't able to check for or install any iOS software updates. We waited until iOS 7.0.3 was released before we removed the block to mesu.apple.com, at which point we told users that if they wanted to update to iOS 7 they could do so OTA. We used our MDM to run reports periodically to see how many people updated to iOS 7 and how many stayed on iOS 6. As time went on, just about everyone updated on their own.
    If you go this route (depending on the number of devices you have), you may want to take a look at Caching Server 2 to help with the network load https://www.apple.com/osx/server/features/#caching-server . From Apple's website, "When a user on your network downloads new software from Apple, a copy is automatically stored on your server. So the next time other users on your network update or download that same software, they actually access it from inside the network."
    I wish there was a way for MDMs to manage iOS updates, but unfortunately Apple hasn't made this feature available to MDM providers. I've given this feedback to our Apple SE, but haven't heard if it is being considered or not. Keeping fingers crossed.
    Hope this helps. Let us know what you decide on and keep us posted on the progress. Good luck!!
    ~Joe

  • BEST PRACTICES FOR CREATING DISCOVERER DATABASE CONNECTION -PUBLIC VS. PRIV

    I have enabled SSO for Discoverer, so when you browse to http://host:port/discoverer/viewer you get prompted for your SSO username/password. I have enabled users to create their own private connections. I logged in as portal and created a private connection. Then, from Oracle Portal, I created a portlet and added a Discoverer worksheet using the private connection that I created as the portal user. This works fine: users access the portal and can see the worksheet. When they click the analyze link, the users are prompted to enter a password for the private connection. The following message is displayed:
    The item you are requesting requires you to enter a password. This could occur because this is a private connection or because the public connection password was invalid. Please enter the correct password now to continue.
    I originally created a public connection and then followed the same steps from Oracle Portal to create the portlet and display the worksheet. The worksheet is displayed properly from Portal, and when users click the analyze link they are taken to Discoverer Viewer without having to enter a password. The problem with this is that when a user browses to http://host:port/discoverer/viewer they enter their SSO information, and then any user with an SSO account can see the public connection... very insecure! When private connections are used, no connection information is displayed to SSO users when logging into Discoverer Viewer.
    For the very first step, when editing the worksheet portlet from Portal, I enter the following for Database Connections:
    Publisher: I choose either the private or public connection that I created
    Users Logged In: Display same data to all users using connection (Publisher's Connection)
    Users Not Logged In: Do not display data
    My question is: what are the best practices for creating Discoverer database connections?
    Is there a way to create a public connection, but not display it at http://host:port/discoverer/viewer?
    Can I restrict access to http://host:port/discoverer/viewer to specific SSO users?
    Overall, I want roughly 40 users to have access to my Portal page group. I then want to display portlets with Discoverer worksheets. For certain worksheets I want the ability to display the analyze link. When the SSO user clicks on this they will be taken to Discoverer Viewer and prompted for no logon information. All SSO users will see the same data; there is no need to restrict access based on SSO username. One database user will be set up in either the public or private connection.

    You can make this happen by creating a private connection for the 40 users via a CAPI script, and when creating the portlet, selecting the second option in the Users Logged In section. That way the portlet uses their own private connection every time a user logs in, so it won't ask for a password.
    Also, there is an option in ASC, in the Discoverer section, for whether or not a password must be entered, if your version is 10.1.2.2. Let me know if you need more information.
    Thanks,
    Kiran
