Creation of PD Profiles directly within Production environment.
The company I work for has recently upgraded all SAP environments, including the one where HR resides. Prior to the upgrade we were able to create PD profiles directly within PRD via OOSP (which updates some of the T77* tables), but since the upgrade we no longer can.
Does anyone know what configuration changes are required so that we can configure PD profiles within PRD again?
Thanks in advance!
HIMASON
I had the same problem and this is the solution I was given.
To make the OOSP table maintainable in production, go into the IMG in the production system via the path Personnel Management > Personnel Development > Basic Settings > Authorization Management > Structural Authorizations, then position the cursor on the text Maintain Structural Authorization Profiles.
Then choose Edit > Display IMG Activity from the menu.
Click on the Maint. Objects tab.
Double-click on T77PQ.
The Current Settings box needs to be ticked.
But first it looks like the system has to be set to modifiable (this is in table T000), OR do it in DEV and transport it through - you need a developer's key to do this.
Similar Messages
-
Change Doc for Auth Changes to Profiles directly in Production
We use CUA and Profile Generator.
How can I prove that roles, and the authorizations in the profiles assigned to roles, are not being edited directly in production (they are transported in)?
When I use SUIM, for roles, for authorizations, or for any option under "For Roles", I don't see any indication that a change came from the dev or assurance systems, nor any indication that it was made directly in production.
If I choose the "for authorizations" view, it appears to find only a few temporary roles which could have been created and/or maintained directly in production. How can I be assured?
Our transport logs from the dev system were a bit corrupted by a client "refresh" copy action, so I can't use those to check whether the date of change of a role coincides with a transport in the E070 or E071 tables.
thank you for any thoughts!
PS: Table CD1251 is not populated in our system.
David Berry wrote:
> Hi Bernhard
>
> Should this be allowed at all in prod? If an ungenerated role (or even one without a profile) makes it to production, should security be generating it? SUPC warns about generating in a productive client, and there is the chance that a T-P* profile is created accidentally.
I would allow it, especially for emergency cases (maybe not for all admins). Nowadays with auth/new_buffering=4 it should not be so dangerous anymore to generate in production....
b.rgds, Bernhard
-
Use of Emigall for creation of masters in the production environment
Hi,
The EMIGALL objects are normally used for migrating legacy master data and/or cut-over data before the production environment is up.
I am contemplating using an EMIGALL object for the creation of contract account masters in the production environment. Another option I have is to use the standard BAPI for creating contract account masters.
Can anyone tell me whether it is proper to use an EMIGALL object for day-to-day creation of master data in the production environment? Is there any disadvantage or risk involved?
Kindly reply soon.
Regards,
Ganesh
I've already used EMIGALL multiple times to do delta migrations into an operational prod environment.
Looking purely at the functionality, it should be possible to use EMIGALL as a master data generator. I just think you need to look into the requirements:
who will use it? end-user/application manager/...
what's the amount of data to be loaded?
what's the time window of the load? Day/night
how is the data supplied?
As you know, emigall EATS system resources like mad, so using it during the day might not be preferable. emigall is also very picky about the file format, whereas in a custom report you can define the input structure yourself.
On the other hand, the error handling and follow-up of emigall is great...
Personally, I'd go for a custom report with a BAPI... It'll give you more flexibility than emigall. -
Direct changes in the production environment...
Hi Experts,
Hope you are well.
Could you please share which changes we can make directly in the production environment at the configuration level, without creating a transport request?
Please share the list of such changes in the production environment.
Thank you for your understanding.
Thanks & Regards
Rajesh
Hello,
As per best practice, any configuration change should go through the transport process, while all master data changes should be done directly in production.
Some examples of changes that can be done directly are:
1. Exchange rates
2. Sets used in validation/substitution (GS01/GS02)
3. All condition records
4. Customer/vendor/GL/Asset/cost center/cost element master data
5. Tax percentages
There are many more... Tell us your exact requirement; then we can guide you better.
Thanks,
V V -
BIN creation directly in Production Server
Dear all,
Can we create bins directly in the production server without creating them in the golden server first?
I mean, will it work if we create them directly in the production server?
Please suggest your view.
Regards,
Rocky
Storage bins are master data and not customizing; hence they are created directly in the production system.
If you create them in the development system, you do not get any transport request.
(but it is possible to create a transport request manually and add the LAGP table entries to it)
but usually bins do not have individual names, a warehouse has a certain structure, and bins get identified by coordinates.
you can customize a schema for the naming and generate the bins.
this schema can be transported.
And then you just execute LS05 to generate the bins -
Issue in production environment
Hi,
We hit an issue in the production environment last Friday and have not yet been able to find the root cause.
Environment
GUI : .Net
Server : Java
Application Server: Jboss
Database: Sqlserver
The Java application is deployed on the JBoss application server, which connects to SQL Server as the back end. The GUI makes web service calls to the server for data communication.
Issue
On Friday at 2 PM users reported slowness in the application (no response from the server in the GUI); all requests from the GUI were timing out.
A restart of JBoss didn't help.
We restarted JBoss and SQL Server a second time, and then the environment became stable.
Analysis
1. From the thread dump in the JBoss log we see that there are many threads waiting on a socket for a database connection (according to the database team, all connections were open and available at that particular time).
2. The size of the transaction log almost doubled during this period (when the issue was reported).
We couldn't find a reason why this issue happened. Is it a database issue or something else? Please suggest...
Thanks,
Manoj
Hi Manoj,
According to your description, when running the web application many threads are waiting on a socket for a database connection, which causes requests from the front-end GUI to time out. Right?
In this scenario, users can access the front-end GUI, which means the application server is working. Since the connections were all open at that particular time, it should not be an issue with your JDBC setup. It seems to be a deadlock issue, which causes other threads to wait and hang. I recommend checking and optimizing your code. On the database side, I suggest you open SQL Profiler and select the Deadlock Graph event; it will record when a deadlock occurs. Please refer to the links below:
Detecting and Ending Deadlocks
Analyze Deadlocks with SQL Server Profiler
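As a complement to the SQL Server deadlock trace, the JVM itself can report Java-level deadlocks, which helps distinguish an application-side lock-up from exhausted database connections. Below is a minimal sketch using the standard java.lang.management API; the class name and messages are illustrative, not taken from the original thread dump:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockCheck {
    public static void main(String[] args) {
        // Ask the JVM's built-in thread manager for threads that are
        // deadlocked on monitors or ownable synchronizers (ReentrantLock etc.).
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long[] ids = mx.findDeadlockedThreads(); // returns null if none

        if (ids == null) {
            System.out.println("No Java-level deadlock detected");
            return;
        }
        // For each deadlocked thread, report what it is blocked on and who owns it.
        for (ThreadInfo info : mx.getThreadInfo(ids, true, true)) {
            System.out.println(info.getThreadName()
                    + " waiting on " + info.getLockName()
                    + " held by " + info.getLockOwnerName());
        }
    }
}
```

Run something like this (or expose it via a JMX client) while the hang is occurring: if findDeadlockedThreads() returns null during the incident, that points back at resource starvation, such as a drained connection pool, rather than a Java deadlock.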
If you have any question, please feel free to ask.
Simon Hou
TechNet Community Support -
Create a new portlet in Deployed Portal in production environment
Are there any ways to import or create a new portlet in a deployed portal in a production environment?
This can be done via WSRP proxy portlets and streaming desktops. First, you'll need to have a WSRP producer set up somewhere. This could be another WLP webapp with portlets, or another server altogether. Or, you can use the JSR 286 WSRP import tool from within the Portal Administration Console (I think it's under Services | WSRP | Import Tool) -- this will allow you to upload .war(s) of JSR 168 or 286 portlets, which will be turned into WSRP producer(s).
Then, you can use the Portal Administration Console to register a WSRP producer, and then add portlets from the producer to your desktop (http://download.oracle.com/docs/cd/E15919_01/wlp.1032/e14235/chap_fed_books_pages.htm#FPGWP690). Additionally, once the producer has been registered in the Portal Administration Console, an administrator user can use the Dynamic Visitor Tools from within the streaming desktop itself to add WSRP proxy portlets to the desktop (http://download.oracle.com/docs/cd/E15919_01/wlp.1032/e14243/dvt.htm#PDGWP691).
It is not possible to add new local .portlet files to a deployed application in production mode. That requires adding the file artifacts to the .ear/.war and redeploying the application.
Greg -
View remarks of an approver directly within the "Messages / Alerts" window.
Hi,
Here is a need from some customers. It makes sense to ask for this functionality... It works from one side (requestor side) but not from the other (approver side)... Could this enhancement be implemented in future patches / versions?
Thanks,
Luce
Version: 2005-SP1-PL36
Description of requirements (Please provide a detailed description) :
Be able to view the remarks of an approver directly within the "Messages / Alerts" overview. Identical behavior as for the approver when he receives a request for approval.
Valid as of (date that this legal requirement is applicable):
Business needs (Please describe the impact on your business, if the functionality is not realized):
With no view of the remarks coming from the approver, we can miss important information from the approver... And having to open a "decision report" in order to view the remarks that the approver has written makes no sense and, in addition, it's time consuming!
Examples (Please describe a typical example, how the functionality should work):
Sophie wants to create a sales order, but she needs an approval to do it. So, when pressing the "Add" button in the Sales order window, the system launches the approval process. She writes in the REMARKS field "Need your approval ASAP please".
Bill will be the approver of the sales order, so he receives the request for document approval in his "Messages / Alerts Overview" window.
Bill clicks on the "Request for document approval" in the "Messages / Alerts Overview" window. By doing that, he sees in the middle part of the "Messages / Alerts Overview" window that Sophie wrote a remark: "Need your approval ASAP please". Knowing this, Bill will open the approval window and take his decision immediately.
He writes a remark to Sophie: "Please, don't create any sales order for this customer before next week because we don't know if they will pay us soon enough", selects "Approve" and sends his approval to Sophie.
Sophie can see that Bill has approved her sales order in the "Messages / Alerts Overview" window.
By clicking on the "Document generation approved", Sophie can see very quickly, in the middle part of the "Messages / Alerts Overview" window, that Bill gave her the instruction to avoid sales order creation for this customer in the upcoming week. So she won't try to create any sales orders anymore for this partner.
Current Workaround (Please describe the workarounds you are using at the moment):
Open the "Approval decision report" and filter to find the appropriate document approval, in order to view the remarks of the approver. This is very time consuming...
Proposed solution (Please suggest how the new functionality should work):
It's simple!! Why couldn't you just standardize the way things work and do as it works on the approver side? The approver can see / read the remarks coming from the requestor in the "Messages / Alerts Overview" window! It would be so easy to do the same thing on the requestor side, so that the requestor can see / read the remarks coming from the approver in the "Messages / Alerts Overview" window!
I don't know if someone will take care of this... And I have to close a question since I want to post a new thread... So I have no choice, I must close one of my questions...
-
Java.lang.NullPointerException in MQ adapter in Production Environment
Hi,
My process sends a request to ResultsAAA or ResultsBBB (MQ queues); depending on the request, the message is dequeued from ResultsAAA or ResultsBBB and the BPEL process executes. We have configured an error queue for the no-data case. When I tested it, I got the following exception in the production environment; please provide any suggestions/directions.
Exception
[2012-02-10T09:58:33.168-05:00] [Soa_server1] [ERROR] [] [oracle.soa.adapter] [tid: orabpel.invoke.pool-4.thread-13]
[userId: <anonymous>] [ecid: fda340adc9569001:4b5fb511:13546d7766d:-8000-0000000000cb7
1aa,0:1:102971537] [APP: soa-infra] [composite_name: ResultsAAA] [component_instance_id: 102715847] [component_name:
ResultsAAA] MQ Series Adapter ResultsBBB:PostBBB
LToTrends [ Dequeue_ptt::Dequeue(body) ] Error retrieving NXSD encoding...
[2012-02-10T09:58:33.173-05:00] [Soa_server1] [ERROR] [] [oracle.soa.adapter] [tid: orabpel.invoke.pool-4.thread-36]
[userId: <anonymous>] [ecid: fda340adc9569001:4b5fb511:13546d7766d:-8000-0000000000cb7
1d4,0:1:102971543] [APP: soa-infra] [composite_name: ResultsAAA] [component_instance_id: 102715855] [component_name:
ResultsAAA] MQ Series Adapter ResultsBBB:PostBBB
LToTrends [ Dequeue_ptt::Dequeue(body) ] Error retrieving NXSD encoding...
[2012-02-10T09:58:33.185-05:00] [Soa_server1] [ERROR] [] [oracle.soa.adapter] [tid: orabpel.invoke.pool-4.thread-13]
[userId: <anonymous>] [ecid: fda340adc9569001:4b5fb511:13546d7766d:-8000-0000000000cb7
1aa,0:1:102971537] [APP: soa-infra] [composite_name: ResultsAAA] [component_instance_id: 102715847] [component_name:
ResultsAAA] MQ Series Adapter ResultsBBB:PostBBB
LToTrends [ Dequeue_ptt::Dequeue(body) ] [[
java.lang.NullPointerException
at oracle.tip.adapter.mq.outbound.MessageProducer.getEncodingFromNXSD(MessageProducer.java:402)
at oracle.tip.adapter.mq.outbound.MessageProducer.updateMessageEncodingFromNXSD(MessageProducer.java:427)
at oracle.tip.adapter.mq.outbound.MessageProducer.produce(MessageProducer.java:363)
at oracle.tip.adapter.mq.outbound.InteractionImpl.execute(InteractionImpl.java:168)
at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAInteractionInvoker.executeJcaInteraction
(JCAInteractionInvoker.java:311)
at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAInteractionInvoker.invokeJcaReference
(JCAInteractionInvoker.java:525)
at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAInteractionInvoker.invokeAsyncJcaReference
(JCAInteractionInvoker.java:508)
at oracle.integration.platform.blocks.adapter.fw.jca.cci.JCAEndpointInteraction.performAsynchronousInteraction
(JCAEndpointInteraction.java:491)
at oracle.integration.platform.blocks.adapter.AdapterReference.post(AdapterReference.java:231)
at oracle.integration.platform.blocks.mesh.AsynchronousMessageHandler.doPost(AsynchronousMessageHandler.java:142)
at oracle.integration.platform.blocks.mesh.MessageRouter.post(MessageRouter.java:194)
at oracle.integration.platform.blocks.mesh.MeshImpl.post(MeshImpl.java:215)
at sun.reflect.GeneratedMethodAccessor1672.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint
(ReflectiveMethodInvocation.java:182)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:148)
at oracle.integration.platform.metrics.PhaseEventAspect.invoke(PhaseEventAspect.java:71)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at $Proxy303.post(Unknown Source)
at oracle.fabric.CubeServiceEngine.postToMesh(CubeServiceEngine.java:806)
at com.collaxa.cube.ws.WSInvocationManager.invoke(WSInvocationManager.java:258)
at com.collaxa.cube.engine.ext.common.InvokeHandler.__invoke(InvokeHandler.java:1056)
at com.collaxa.cube.engine.ext.common.InvokeHandler.handleNormalInvoke(InvokeHandler.java:583)
at com.collaxa.cube.engine.ext.common.InvokeHandler.handle(InvokeHandler.java:130)
at com.collaxa.cube.engine.ext.bpel.common.wmp.BPELInvokeWMP.__executeStatements(BPELInvokeWMP.java:74)
at com.collaxa.cube.engine.ext.bpel.common.wmp.BaseBPELActivityWMP.perform(BaseBPELActivityWMP.java:158)
at com.collaxa.cube.engine.CubeEngine._performActivity(CubeEngine.java:2463)
at com.collaxa.cube.engine.CubeEngine.performActivity(CubeEngine.java:2334)
at com.collaxa.cube.engine.CubeEngine.handleWorkItem(CubeEngine.java:1115)
at com.collaxa.cube.engine.dispatch.message.instance.PerformMessageHandler.handleLocal
(PerformMessageHandler.java:73)
at com.collaxa.cube.engine.dispatch.DispatchHelper.handleLocalMessage(DispatchHelper.java:220)
at com.collaxa.cube.engine.dispatch.DispatchHelper.sendMemory(DispatchHelper.java:328)
at com.collaxa.cube.engine.CubeEngine.endRequest(CubeEngine.java:4350)
at com.collaxa.cube.engine.CubeEngine.endRequest(CubeEngine.java:4281)
at com.collaxa.cube.engine.CubeEngine.createAndInvoke(CubeEngine.java:679)
at com.collaxa.cube.engine.delivery.DeliveryService.handleInvoke(DeliveryService.java:654)
at com.collaxa.cube.engine.ejb.impl.CubeDeliveryBean.handleInvoke(CubeDeliveryBean.java:293)
at sun.reflect.GeneratedMethodAccessor1609.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at com.bea.core.repackaged.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:310)
at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint
(ReflectiveMethodInvocation.java:182)
at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed
(ReflectiveMethodInvocation.java:148)
at com.bea.core.repackaged.springframework.jee.intercept.MethodInvocationInvocationContext.proceed
(MethodInvocationInvocationContext.java:104)
at oracle.security.jps.ee.ejb.JpsAbsInterceptor$1.run(JpsAbsInterceptor.java:94)
at java.security.AccessController.doPrivileged(AccessController.java:284)
at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:313)
at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:413)
at oracle.security.jps.ee.ejb.JpsAbsInterceptor.runJaasMode(JpsAbsInterceptor.java:81)
at oracle.security.jps.ee.ejb.JpsAbsInterceptor.intercept(JpsAbsInterceptor.java:89)
at oracle.security.jps.ee.ejb.JpsInterceptor.intercept(JpsInterceptor.java:105)
at sun.reflect.GeneratedMethodAccessor1588.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at com.bea.core.repackaged.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:310)
at com.bea.core.repackaged.springframework.jee.intercept.JeeInterceptorInterceptor.invoke
(JeeInterceptorInterceptor.java:69)
at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed
(ReflectiveMethodInvocation.java:171)
at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed
(DelegatingIntroductionInterceptor.java:131)
at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke
(DelegatingIntroductionInterceptor.java:102)
at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed
(ReflectiveMethodInvocation.java:171)
at com.bea.core.repackaged.springframework.jee.spi.MethodInvocationVisitorImpl.visit
(MethodInvocationVisitorImpl.java:37)
at weblogic.ejb.container.injection.EnvironmentInterceptorCallbackImpl.callback
(EnvironmentInterceptorCallbackImpl.java:54)
at com.bea.core.repackaged.springframework.jee.spi.EnvironmentInterceptor.invoke(EnvironmentInterceptor.java:50)
at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed
(ReflectiveMethodInvocation.java:171)
at com.bea.core.repackaged.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke
(ExposeInvocationInterceptor.java:89)
at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed
(ReflectiveMethodInvocation.java:171)
at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed
(DelegatingIntroductionInterceptor.java:131)
at com.bea.core.repackaged.springframework.aop.support.DelegatingIntroductionInterceptor.invoke
(DelegatingIntroductionInterceptor.java:102)
at com.bea.core.repackaged.springframework.aop.framework.ReflectiveMethodInvocation.proceed
(ReflectiveMethodInvocation.java:171)
at com.bea.core.repackaged.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
at $Proxy290.handleInvoke(Unknown Source)
at com.collaxa.cube.engine.ejb.impl.bpel.BPELDeliveryBean_5k948i_ICubeDeliveryLocalBeanImpl.__WL_invoke(Unknown
Source)
at weblogic.ejb.container.internal.SessionLocalMethodInvoker.invoke(SessionLocalMethodInvoker.java:39)
at com.collaxa.cube.engine.ejb.impl.bpel.BPELDeliveryBean_5k948i_ICubeDeliveryLocalBeanImpl.handleInvoke(Unknown
Source)
at com.collaxa.cube.engine.dispatch.message.invoke.InvokeInstanceMessageHandler.handle
(InvokeInstanceMessageHandler.java:35)
at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:140)
at com.collaxa.cube.engine.dispatch.BaseDispatchTask.process(BaseDispatchTask.java:88)
at com.collaxa.cube.engine.dispatch.BaseDispatchTask.run(BaseDispatchTask.java:64)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:897)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:919)
at java.lang.Thread.run(Thread.java:736)
[2012-02-10T09:58:33.185-05:00] [Soa_server1] [ERROR] [] [oracle.soa.adapter] [tid: orabpel.invoke.pool-4.thread-36]
[userId: <anonymous>] [ecid: fda340adc9569001:4b5fb511:13546d7766d:-8000-0000000000cb7
1d4,0:1:102971543] [APP: soa-infra] [composite_name: ResultsAAA] [component_instance_id: 102715855] [component_name:
ResultsAAA] MQ Series Adapter ResultsBBB:PostBBB
LToTrends [ Dequeue_ptt::Dequeue(body) ] [[
java.lang.NullPointerException
Thanks
Mani
Hi Paul,
Thanks for the reply. Actually, in my input schema I include another data type and the data type's base schema.
Can I add that encoding to the XML document declaration for each schema, or is it sufficient only for the main input schema?
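For illustration, the encoding the MQ adapter's translator looks for is typically declared as an NXSD annotation on the xsd:schema element of each native-format schema it processes, rather than in the runtime payload. A minimal sketch of such a schema header follows; the nxsd namespace and attributes reflect Oracle's native-format schema conventions, the target namespace is a placeholder, and the exact annotation set should be verified against your adapter version:

```xml
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:nxsd="http://xmlns.oracle.com/pcbpel/nxsd"
            targetNamespace="http://example.com/results"
            nxsd:version="XSD"
            nxsd:encoding="UTF-8">
  <!-- element declarations for the message payload go here -->
</xsd:schema>
```

If the main schema imports other schemas, declaring the encoding consistently on each imported schema header avoids the translator falling back to a null encoding, which matches the NullPointerException in getEncodingFromNXSD above.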
Thanks
Mani -
Couldn't do a particular function only in production environment
Hi All,
I am not able to perform a particular function in the production environment, so I took an SU53 screenshot and found the missing authorization object. I added the needed role in quality and tested it, and it worked. But when I tried the same in production with that particular role, I couldn't do it.
Can you help me locate where the issue is?
A novice asked the Master: ``Here is a programmer that never designs, documents or tests his programs. Yet all who know him consider him one of the best programmers in the world. Why is this?''
The Master replies: ``That programmer has mastered the Tao. He has gone beyond the need for design; he does not become angry when the system crashes, but accepts the universe without concern. He has gone beyond the need for documentation; he no longer cares if anyone else sees his code. He has gone beyond the need for testing; each of his programs are perfect within themselves, serene and elegant, their purpose self-evident. Truly, he has entered the mystery of Tao.''
May the Tao be with you.
P.S. --> Asking Good Questions in the SCN Discussion Spaces will help you get Good Answers -
MSDN and Production Environment (again)
I started this on another forum before I found this one, but this seems a more suitable place.
The definition of "production environment" seems rather odd. In some responses on this forum it appears to refer to "soft" systems whereas the latest MSDN licence refers to environment and physical kit.
Below is a conversation I had over email with someone from MSDN and I find the whole thing utterly bizarre. I cannot for the life of me see how this helps anyone apart from MS being able to charge for non-production software. It renders having a powerful desktop
for local lab experimentation pointless as you're not allowed to install anything and effectively doubles the hardware cost to small companies if they have to buy a separate server for any testing work (yes, best practice and all that, but budgets...) or pay
out for a Windows Datacenter licence.
Question:
“If a physical machine running one or more virtual machines is used entirely for development and test, then the operating system used on the physical host system can be MSDN software. However, if the physical machine or any of the VMs hosted on that physical
system are used for other purposes, then both the operating system within the VM and the operating system for the physical host must be licensed separately.”
Is this actually saying that if I have a physical server licenced with a purchased (not MSDN) Server 2012 R2, running Hyper-V with, say, a production file server VM on it, that ALL Windows VMs on that machine must have purchased licences even if they
are only for development & testing purposes?
Is this saying that all production and development Windows VMs must be only completely separate hardware, cluster, SAN, etc otherwise you must pay for full licences for the VMs?
Or does it just mean that the bare metal licence (plus any additional ones required for running further production VMs) must be purchased if the VMs are a mix of production and development?
Answer:
We kindly inform that any products licensed under the developer tools model (e.g. SQL/BizTalk developer and/or MSDN) must be installed on their own separate physical hardware.
You are not allowed to run test or development products on a server where production workloads are running at the same time. Kindly run your developer software on a device/host that is dedicated to testing and development.
Explanation:
The Product Use Rights (PUR) say that the developer software is not licensed for use in a production environment. Even if the PUR does not have a separate definition of production environment, a production environment is a set of resources for network, physically
dedicated hardware and software to provide "live" service. If the intent was to say that the same physical server could be used for both development and production - it would say "not licensed for use in a production OSE," instead
it says environment.
See current PUR, page 51:
Developer Tools (User License)
You have the rights below for each license you acquire.
# You must assign each license to a single user.
# Each Licensed User may run an unlimited number of copies of the Developer Tools software and any prior version on any device.
# The Licensed User may use the software for evaluation and to design, develop, test, and demonstrate your programs. These rights include the use of the software to simulate an end user environment to diagnose issues related to your programs.
# The software is not licensed for use in a production environment.
# Additional rights provided in license terms included with the software are additive to these product use rights, provided that there is no conflict with these product use rights, except for superseding use terms outlined below.
Question:
Classifying an entire physical infrastructure as "production" in these days of virtualisation and shared storage really does not make any sense at all. Not using the software for production purposes makes perfect sense, but not being able to locate
it alongside production OS installs is mad. Does this only apply to the server running the VM (CPU and RAM)? If the VHDX is hosted on shared SAN storage does the SAN have to be dedicated to non-production storage?
Answer:
We kindly inform that after double-checking the case we would like to confirm the development software cannot be run on the same hardware with production software.
We have also received a feedback from the responsible team regarding your request about a dedicated SAN (Storage Area Network) for MSDN software.
They have confirmed that the SAN has to be dedicated to the development and testing environment if it is used to run the software acquired through MSDN.
Question:
OK, so if I have my desktop (which is a production environment as I use it for email and other day to day office tasks), can I turn on Hyper-V and install an MSDN Windows Server 2012 instance for development purposes?
Answer:
We kindly inform it is not allowed to install and run software from MSDN subscriptions in production environments. Please do not install MSDN software on a desktop in a production environment:
"[.] The customer will need to run the developer software on a device/host that is dedicated to testing and development.
Explanation:
The Product Use Rights (PUR) say that the developer software is not licensed for use in a production environment. Even if the PUR does not have a separate definition of production environment, a production environment is a set of resources for network, physically
dedicated hardware and software to provide "live" service. If the intent was to say that the same physical server could be used for both development and production - it would say "not licensed for use in a production OSE," instead
it says environment.
See current PUR, page 51:
Developer Tools (User License)
You have the rights below for each license you acquire.
- You must assign each license to a single user.
- Each Licensed User may run an unlimited number of copies of the Developer Tools software and any prior version on any device.
- The Licensed User may use the software for evaluation and to design, develop, test, and demonstrate your programs. These rights include the use of the software to simulate an end user environment to diagnose issues related to your programs.
- The software is not licensed for use in a production environment.
- Additional rights provided in license terms included with the software are additive to these product use rights, provided that there is no conflict with these product use rights, except for superseding use terms outlined below.
Hi Mike,
It sucks that MSDN software can't be run in a production environment; that means you have to have two entirely separate hardware environments, which is costly and seems unnecessary.
That's essentially it. I'm not saying for one second that it should be used for production purposes, just that its physical location shouldn't be relevant. Also, the word "environment" is a very bad choice in the documentation simply because it's very open to interpretation.
A production environment is defined as an environment that is accessed by end users of an application (such as an Internet Web site) and that is used for more than Acceptance Testing of that application or Feedback. Some scenarios that constitute production environments include:
Environments that connect to a production database.
Environments that support disaster-recovery or backup for a production environment.
Environments that are used for production at least some of the time, such as a server that is rotated into production during peak periods of activity.
So I think, but am not sure (here's that inconclusive language), that your desktop machines do not count as production environments based on that, unless end users are connecting to them. (I dearly hope they are not!)
My reading is based on the "Other Guidance" section:
"If a physical machine running one or more virtual machines is used entirely for development and test, then the operating system used on the physical host system can be MSDN software. However, if the physical machine or any of the VMs hosted on that physical system are used for other purposes, then both the operating system within the VM and the operating system for the physical host must be licensed separately."
This is the crux of the matter: the interpretation of "licensed separately." A (to my mind) sensible reading of that would be "if you're running any production-purpose VMs on a server, then the physical host OS must be a full licence [presuming it's Server 2012 and not, say, VMware or Hyper-V 2012], as must all production-purpose VMs on that server." This has been interpreted by others (I'm not the first), and backed up by MS, as meaning that if you want to run any dev/test VMs on a server that also runs production VMs, then you can't use MSDN for those dev/test VMs.
Also, there is a section on the MSDN Licensing help page that says (with my added emphasis):
"Many MSDN subscribers use a computer for mixed use: design, development, testing, and demonstration of your programs (the use allowed under the MSDN subscription license) plus some other use. Using the software in any other way, such as doing email, playing games, or editing a document, is another use and is not covered by the MSDN subscription license. When this happens, the underlying operating system must also be licensed normally, by purchasing a regular copy of Windows such as the one that came with a new OEM PC."
Now to me, this seems to say that the underlying operating system on a work machine cannot be licensed using MSDN if that work machine is going to be doing non-MSDN things in addition to MSDN things. It doesn't say "this can't happen"; it just says "when this happens, the underlying OS must be licensed normally..."
So, based on what I'm reading, it seems that this quote from you might not be true:
"We can't install a local MSDN instance of Server 2012 or 8.1 for dev and test under Hyper-V on desktops because desktops used for email, writing documents, etc. are production."
I wouldn't have expected this to be true either, but this is the response I was given. It may well be that my question was misunderstood. I hope that is the case; otherwise one of the big reasons for turning on Hyper-V on expensive, powerful desktops, namely running personal test environments, goes out the window!
Thanks for your time on this. -
Is construction of webi directly in production a best practice?
With BEx queries and universes well consolidated and tested by an IT group, can building Webi reports directly in production, without going through test and quality systems, be considered a BusinessObjects best practice?
Is it possible to allow end users (non-IT personnel) to build these Webi reports?
Is there a best-practices document where SAP makes this recommendation?
Thanks in advance for your answers.
Ramón Mediero
If the universe and everything else have been tested and signed off, and end users are familiar with Webi report development and want ad-hoc reports instead of the pre-developed report set, there is no issue with allowing end users to develop Webi reports in production. However, we have to take care of a few points, such as:
> We need to check whether report creation in production's public folders is feasible. If yes, how? Do we need to create separate folders for individual users, or something else? If not, what is the alternative, e.g. can they create reports in their Favorites folder?
> We also need some control over the number of reports users create; otherwise users may create many reports with huge data refreshes, and PROD will face performance issues, etc.
There can be many such considerations to take into account.
Hope this gives you some idea...
Vills -
RE: Production Environment Definition
Brad,
We use connected environments so that we do not have a single point of
failure.
We use multiple environments and connect them together in a star topology
for reliability of service. Our servers (23 in total) sit out at branches
in the back of beyond and the WAN connections between the servers are
unreliable. One needs a reliable connection to the Name Service which sits
on each Environment Manager. We have thus created 23 connected
environments with an Environment Manager on each LAN. Connected
environments are still a bit buggy but Tech Support is currently working on
fixing the last of the problems. We are still on ver 2H15 for this reason.
Disadvantages of this topology are that making distributions takes a long time, because referenced partitioning cannot be scripted in fscript and econsole only connects to one environment at a time.
There is a Forté consultant in Denver called Pieter Pretorius who has had a
lot of experience with our connected environments. It may be worth
chatting to him.
Regards,
Richard Stobart
Technical Consultant for Forté
E-mail [email protected]
Quick-mail: [email protected]
Voice: (+ 27 83) 269 1942
(+27 11) 456 2238
Fax: (+ 27 83) 8269 1942
-----Original Message-----
From: Brad Wells [SMTP:[email protected]]
Sent: Tuesday, February 10, 1998 11:52 PM
To: 'Forte Users - Sage'
Subject: Production Environment Definition
Hello again,
We are just starting to look at what it will take to setup a production
Forte environment. I have some general questions regarding
considerations that may affect the environment definition and thought
maybe some of the more experienced users could share some thoughts on the
following:
1) What factors lead to the creation of multiple production environments?
a. How many environments should you use in a production situation?
b. Do people create separate environments for separate business units?
c. Are there performance improvements to be had by restricting the
number of server and client nodes included in a single environment?
d. How do the performance benefits of multiple environments compare to
the additional complexity of managing and maintaining multiple connected
environments?
The initial need is for an environment that will service approximately 50
clients and contain a couple of server nodes (database and service
related). However, as the environment grows, it could easily grow to a
size of 600 clients encompassing approximately 15-20 server nodes.
At this point in time, there is no need for the failover support of
connected environments, but this is something we will need to add as the
environment absorbs applications with high reliability needs. Should the
environments be setup and connected right away or can this be easily
added on an "as needed" basis? What other recommendations would you
make?
Has anyone taken advantage of Forte consulting services in defining the production environment? Were you satisfied with the results of the service?
Thanks.
Bradley Wells
[email protected]
Strong Capital Management, Inc
http://www.strong-funds.com
On Tue, 10 Feb 98 13:52:00 PST Brad Wells <[email protected]> writes:
At this point in time, there is no need for the failover support of connected environments, but this is something we will need to add as the environment absorbs applications with high reliability needs. Should the environments be setup and connected right away or can this be easily added on an "as needed" basis? What other recommendations would you make?
From the Forte Systems Management point of view, you can add them "as needed" fairly easily.
Now from the application source code point of view, implementing fail-over support is a different story... You will need to check your SOs' dialog durations, handle DistributedAccessExceptions, "warm up" your distributed references for fail-over, design a solution for restoring global transient data, do lots of testing, etc. So implementing fail-over is not only a systems-management issue; it can have some influence on your application(s) source code.
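Vincent's point, that the fail-over decision lives in application code rather than in the environment manager, can be illustrated with a small generic sketch: catch the distributed-access failure and retry against another replica. This is Python purely for illustration; the service callables and exception type below are invented stand-ins, not Forte TOOL APIs.

```python
class ServiceUnavailableError(Exception):
    """Stand-in for a distributed-access failure (e.g. a partner node is down)."""

def call_with_failover(replicas, request):
    """Try each replica in order; re-raise the last failure only if all of them fail."""
    last_error = None
    for service in replicas:
        try:
            return service(request)
        except ServiceUnavailableError as err:
            last_error = err  # remember the failure, fall through to the next replica
    raise last_error

# Usage: the first replica is down, the second one answers.
def dead(request):
    raise ServiceUnavailableError("primary down")

def alive(request):
    return "handled: " + request

print(call_with_failover([dead, alive], "ping"))  # handled: ping
```

The other work Vincent mentions (warming up references, restoring global transient data) is application-specific and not shown; the sketch only captures the retry shape.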
Hope this helps,
Vincent Figari
You don't need to buy Internet access to use free Internet e-mail.
Get completely free e-mail from Juno at http://www.juno.com
Or call Juno at (800) 654-JUNO [654-5866] -
Deploying AIA 11g composites in Production Environment
Hi all,
I am facing trouble deploying AIA 11g composites in a Production environment. I know how to deploy code to a normal server, but for Production we require a single bundle containing, say, 100 composites for the services developed.
Scenario:
Bundle all AIA composites developed into one single deliverable which can be deployed directly.
Things known or tried:
1. Deployment plans deploy code to the server manually, which means our code has to be present on a remote location in order to deploy it.
Questions:
2. How can we archive MDS data (containing AIA design artifacts) and publish it to MDS?
3. Composites such as EBS/Requester ABCS contain concrete URLs, so how can we make sure they are overridden with that server's hostname:port once deployed on the Production server?
Regards,
ankit
For publishing the changes to MDS, follow these steps:
1) Source the environment by running aiaenv.sh
2) Update UpdateMetaDataDP.xml at <AIA_INSTANCE_HOME>/config with the entries of the documents to be published.
Here is an example of an entry for AIAConfigurationProperties.xml:
<fileset dir="AIA_HOME/aia_instances/INSTANCE_NAME/AIAMetaData">
<include name="config/AIAConfigurationProperties.xml" />
</fileset>
Make sure to create a different fileset dir tag for each entry, otherwise the documents will not be published to MDS.
3) Access the $AIA_HOME/Infrastructure/Install/config folder.
4) Execute the following command:
ant -f UpdateMetaData.xml
Hope it helps!
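To illustrate the "different fileset dir tag for each entry" point above, a fragment of UpdateMetaDataDP.xml publishing two documents might look like the sketch below. AIA_HOME and INSTANCE_NAME are the placeholders from the thread, and the second include path is a hypothetical example, not something from the original post.

```xml
<!-- Sketch only: each document to be published gets its own fileset element.
     The second include path is a hypothetical example. -->
<fileset dir="AIA_HOME/aia_instances/INSTANCE_NAME/AIAMetaData">
  <include name="config/AIAConfigurationProperties.xml" />
</fileset>
<fileset dir="AIA_HOME/aia_instances/INSTANCE_NAME/AIAMetaData">
  <include name="AIAComponents/ApplicationObjectLibrary/SampleApp/V1/Sample.wsdl" />
</fileset>
```

After editing the file, running ant -f UpdateMetaData.xml (step 4 above) picks up every fileset entry and publishes it to MDS.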