Best Practice for the Vendor Consignment Process
Hi,
Does anybody have a best-practices document for the vendor consignment process? If so, please send it to me.
Could you also explain the consignment process in SAP?
Thanks & Regards,
Kumar Rayudu
Hi Kumar,
In order to use consignment in SAP you need master data such as the material master, the vendor master, and a purchasing info record of the consignment type. You have to enter item category K when you enter the PO. The goods receipt posting into vendor consignment stock is non-valuated.
1. The process starts with raising a purchase order for the consignment item.
2. The vendor receives the purchase order.
3. A goods receipt (GR) is posted for the consignment material.
4. The stock is received and placed into consignment stock.
5. Whenever we issue to production, or transfer-post (movement type 411 K) from consignment to own stock, a liability arises.
6. Finally comes the settlement, using transaction MRKO: you settle the amount for the goods consumed during a specific period.
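The steps above can be sketched as a toy model. This is an illustration only, not SAP code; the class and method names are invented:

```python
class ConsignmentStock:
    """Tracks vendor-owned stock held on our premises (non-valuated)."""

    def __init__(self, price_per_unit):
        self.price_per_unit = price_per_unit  # from the consignment info record
        self.on_hand = 0   # still owned by the vendor
        self.consumed = 0  # withdrawals that have created a liability

    def goods_receipt(self, qty):
        # Steps 3-4: GR into consignment stock; no liability is posted yet.
        self.on_hand += qty

    def withdraw(self, qty):
        # Step 5: issue to production, or 411 K transfer to own stock;
        # ownership passes to us, so a liability arises.
        if qty > self.on_hand:
            raise ValueError("not enough consignment stock")
        self.on_hand -= qty
        self.consumed += qty

    def settle(self):
        # Step 6: periodic settlement (as with MRKO): pay for what was consumed.
        amount = self.consumed * self.price_per_unit
        self.consumed = 0
        return amount

stock = ConsignmentStock(price_per_unit=10.0)
stock.goods_receipt(100)  # vendor still owns all 100 pieces
stock.withdraw(30)        # liability for 30 pieces arises
print(stock.settle())     # 30 * 10.0 = 300.0
```

The key point the model captures is that no liability exists at goods receipt; it arises only at withdrawal, and MRKO-style settlement then pays for the accumulated consumption.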
regards
Anand.C
Similar Messages
-
Best practice for the test environment & DBA plan Activities Documents
Dear all,
In our company we have done the hardware sizing.
We have three environments (Test/Development, Training, Production).
However, the test environment has fewer servers than the production environment.
My question is:
What is the best practice for the test environment?
(Are there any recommendations from Oracle related to this? Any PDF files that could help me?)
Also, can I have a detailed document regarding the DBA plan activities?
I appreciate your help and advice
Thanks
Edited by: user4520487 on Mar 3, 2009 11:08 PM
Follow your build document for the same steps you used to build production.
You should know where all your code is. You can use the deployment manager to export your configurations. Export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
-Kevin -
Best Practice for the Service Distribution on multiple servers
Hi,
Could you please suggest the best practice for the setup below.
Requirements: we will use all features in SharePoint (PowerPivot, Search, Reporting Services, BCS, Excel, Workflow Manager, App Management, etc.)
Capacity : We have 12 Servers excluding SQL server.
Please do not just refer me to a URL; suggest as per the requirements.
Thanks
srabon
How about a link to the MS guidance?
http://go.microsoft.com/fwlink/p/?LinkId=286957 -
Req: SAP Best Practice for Funds Management
Dear all,
Please let me know where I can get the SAP Best Practices for Funds Management. Waiting for your valuable reply.
Regards
Manohar
Hello Manohar,
You can find documentation in links below:
Industry Solution Master Guide - SAP for Public Sector:
https://websmp105.sap-ag.de/~form/sapnet?_SHORTKEY=00200797470000065911
SAP Best Practices for Public Sector:
http://help.sap.com/ -> SAP Best Practices -> Industry Packages -> Public Sector
Online Library for Funds Management:
http://help.sap.com/saphelp_erp2005vp/helpdata/en/41/c62c6d6d84104ab938aa7eae51db06/frameset.htm
I hope it helps
Best Regards,
Vanessa Barth. -
Best practice for the Update of SAP GRC CC Rule Set
Hi GRC experts,
We have, in a CC production system, an SoD matrix that we would like to modify extensively, basically by activating many permissions.
What is the best practice for accomplishing our goal?
Many thanks in advance. Best regards,
Imanol
Hi Simon and Amir,
My name is Connie and I work in the Accenture GRC practice (I am a colleague of Imanol's). I have been reading this thread and I would like to ask you a question related to this topic. We have a case with a Global Rule Set ("Logic System") where we may also need to create a Specific Rule Set. Is there a document (from SAP or from best practices) that indicates the potential impact (regarding risk analysis, system performance, process execution time, etc.) of implementing both types of rule sets in a production environment? Are there any special considerations to be aware of? Have you ever implemented this type of scenario?
I would really appreciate your help, and if you could point me to specific documentation that would be of great assistance. Thanks in advance and best regards,
Connie -
Best Practice for the database owner of an SAP database.
We recently had a user account removed from our SAP system when this person left the agency. The account was associated with the SAP database (he created the database a couple of years ago).
I'd like to change the owner of the database to <domain>\<sid>adm (ex: XYZ\dv1adm) as this is the system admin account used on the host server and is a login for the sql server. I don't want to associate the database with another admin user as that will change over time.
What is the best practice for the database owner of an SAP database?
Thanks
Laurie McGinley
Hi Laurie,
I'm not sure if this is best practice or not, but I've always had the SA user as the owner of the database. It just makes restores to other systems easier, etc.
Ken -
What are the best practices for the RCU's schemas
Hi,
I was wondering if there are any best practices for the RCU schemas created with BIEE.
I already have Discoverer (and Application Server), so I have a metadata repository for the Application Server. I will upgrade Discoverer 10g to 11g, so I will create a new schema with RCU in my metadata repository (MR) of the Application Server. I'm wondering if I can put the BIEE RCU schemas in the same database.
Basically,
1. is there a standard for the PREFIX ?
2. If I have multiple Fusion components in the same database, will I have multiple PREFIX_MDS schemas? Can they have the same PREFIX, or do they all need a different prefix?
For example: DISCO_MDS and BIEE_MDS, or can I have DEV_MDS as a schema valid for both Discoverer and BIEE?
Thank you! -
What are the best practices for exception handling in n-tier applications?
The application is a fat client based on the MVVM pattern with the .NET Framework.
That would be to catch all exceptions at a single point in the n-tier solution, log them, and create user-friendly messages to display to the user. -
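A minimal sketch of that single-choke-point idea, shown in Python rather than .NET for brevity; the function names here are invented:

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("app")

def run_safely(action, friendly_message="Something went wrong. Please try again."):
    """Single choke point: run an action from the UI layer, log any exception
    with full detail, and hand the user only a friendly summary."""
    try:
        return action(), None
    except Exception:
        log.exception("unhandled error in %s", getattr(action, "__name__", action))
        return None, friendly_message

def load_orders():
    # Simulated failure somewhere in a lower tier.
    raise ConnectionError("database unreachable")

result, error = run_safely(load_orders)
print(error)  # the user sees the friendly message, not the stack trace
```

The lower tiers stay free of UI concerns: they just raise, and the one boundary decides what the user sees. -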
Best practices for the data push in BlazeDS?
Hello,
Currently I'm working on data push using BlazeDS. I'm wondering about the best practices that need to be followed to reduce hits to the server and increase performance when using data push. Could you please guide me or direct me to good resources on this? Also, I'm currently using the default AMF channel for the data push; please help me choose the right channel for this process.
Hi,
According to
this documentation, “You must configure a new name in Domain Name Services (DNS) to host the apps. To help improve security, the domain name should not be a subdomain
of the domain that hosts the SharePoint sites. For example, if the SharePoint sites are at Contoso.com, consider ContosoApps.com instead of App.Contoso.com as the domain name”.
More information:
http://technet.microsoft.com/en-us/library/fp161237(v=office.15)
For production hosting scenarios, you would still have to create a DNS routing strategy within your intranet and optionally configure your firewall.
The link below will show how to create and configure a production environment for apps for SharePoint:
http://technet.microsoft.com/en-us/library/fp161232(v=office.15)
Thanks
Patrick Liang
Forum Support
Please remember to mark the replies as answers if they
help and unmark them if they provide no help. If you have feedback for TechNet
Subscriber Support, contact [email protected]
Patrick Liang
TechNet Community Support -
Best practice for the replacement of Reports Cluster
Hi,
i've read the note reports_gueide_to_changed_functionality on OTN.
On page 5 it is stated that Reports clustering is deprecated.
Snippet:
Oracle Application Server High Availability provides the industry's most
reliable, resilient, and fault-tolerant application server platform. Oracle
Reports integration with OracleAS High Availability makes sure that your
enterprise-reporting environment is extremely reliable and fault-tolerant.
Since using OracleAS High Availability provides a centralized clustering
mechanism and several cutting-edge features, Oracle Reports clustering is now
deprecated.
Please, can anyone tell me what the best practice is to replace Reports clustering?
It's really annoying that the clustering technology changes in every version of Reports!
martin
hello,
in reality, reports server "clusters" were more a load-balancing solution than clustering (no shared queue or cache). since it is desirable to have one load-balancing/HA approach for the application server, reports server clustering is deprecated in 10gR2.
we understand that this frequent change can cause some level of frustration, but it is our strong belief that unifying the HA "attack plan" for all of the app server components will ultimately benefit customers by simplifying their topologies.
the current best practice is to deploy LBRs (load-balancing routers) with sticky-routing capabilities to distribute requests across middle-tier nodes in an app-server cluster.
several customers in high-end environments have already used this kind of configuration to ensure optimal HA for their systems.
thanks,
philipp -
Best practice for the use of reserved words
Hi,
What is the best practice to observe when using reserved words as column names?
For example, if I insisted on using the word COMMENT for a column name by doing the following:
CREATE TABLE ...
"COMMENT" VARCHAR2(4000),
What impact down the track could I expect and what problems should I be aware of when doing something like this?
Thank You
Ben
Hi, Ben,
Benton wrote:
Hi,
What is the best practice to observe when using reserved words as column names?
Sybrand is right (as usual): the best practice is not to use them.
For example, if I insisted on using the word COMMENT for a column name by doing the following:
CREATE TABLE ...
"COMMENT" VARCHAR2(4000),
What impact down the track could I expect, and what problems should I be aware of when doing something like this?
Using reserved words as identifiers is asking for trouble. You can expect to get what you ask for.
Whatever benefit you may get from naming the column COMMENT rather than, say, CMNT or EMP_COMMENT (if the table is called EMP) will be insignificant compared to the extra debugging you will certainly need. -
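The quoting pitfall from the reserved-words discussion above can be demonstrated with Python's built-in sqlite3 module, where ORDER is likewise reserved; this is a sketch of the general behaviour, not Oracle-specific:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Unquoted, a reserved word is rejected outright.
try:
    conn.execute("CREATE TABLE t1 (order INTEGER)")
except sqlite3.OperationalError as e:
    print("unquoted failed:", e)

# Quoted, it works; but now every later reference must quote it too.
conn.execute('CREATE TABLE t2 ("order" INTEGER)')
conn.execute('INSERT INTO t2 ("order") VALUES (1)')
row = conn.execute('SELECT "order" FROM t2').fetchone()
print(row[0])  # 1
```

Once an identifier is quoted at creation, every subsequent statement has to quote it as well, which is exactly the ongoing debugging cost described above. -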
What is the best practice for the WLI?
Hello to everyone,
I am a newbie to the WebLogic Integration Server and I have read the WLI docs
from BEA, like "Programming BPM Client Applications", but I found them bitter
and astringent. I am wondering whether there are documents and samples for WebLogic Integration
Server at the entry level. I'd appreciate it if anyone could give some comments.
Thanks.
Leon
I was wondering if anybody knew of any BEA documents that outlined best practices
in terms of naming standards for business operations, event keys, tasks, etc.
"Tinnapat Chaipanich" <[email protected]> wrote:
Hi
Why not try BPM tutorial :)
http://edocs.bea.com/wli/docs70/bpmtutor/index.htm
Regards,
Tinnapat.
Any best practices for the iPad mini?
I am in the beginning stages of designing my mag for the iPad. Now the iPad mini seems to be all the hype, and the latest news states that the mini may outsell the larger one.
So... I know that the pixel dimensions carry over 1:1 when bumping down to the smaller screen, but what about font sizes? What about the experience? Anyone already ahead of the game?
I have my own answers to these questions, but does anyone out there have some best-practice advice or links to some articles they find informative?
I think 18-pt body text is fine for the iPad 2 but too small for the iPad mini. Obviously it depends on your design and which font you're using, but it seems like a good idea to bump up the font size a couple of points to account for the smaller screen.
For the same reason, be careful with small buttons and small tap areas.
I've also noticed that for whatever reason, MSOs and scrollable frames in PNG/JPG articles look great on the iPad 2 but look slightly more pixelated on the iPad Mini. It might just be my imagination.
Make sure that you test your design on the Mini. -
Defragmenting Mac (Leopard) best practice for the safest disk optimisation
What is the best way to defragment?
It's always risky to perform disk-level operations; some of these apps can lead to file corruption, and I know of a lot of cases where third-party software has caused issues.
I have DiskWarrior, TechTool Pro, and Drive Genius; which one is safest? I know most of them require booting off other media to work on the boot drive.
Recently I upgraded to a 320 GB 7200 rpm drive, and since I had about 190 GB of data I thought Disk Utility would be faster. It did copy in 1.5 hours, but it didn't defragment my hard drive, since it does a block-level copy. My computer is IN FACT RUNNING SLOWER.
I'm thinking of using Carbon Copy Cloner to an external hard drive (file-level copy), and then booting off the 10.5 restore DVD to perform the restore with Disk Utility, which is a lot faster since it does a block-level copy.
I use iDefrag.
http://www.coriolis-systems.com/iDefrag-faq.php
from iDefrag help:
Why Defragment?
It has often been asserted that defragmentation (or disk optimization) is not a good idea on
systems using Apple’s HFS+ filesystem. The main reasons given for this historically have been:
HFS+ is very much better at keeping files defragmented than many other commodity filesystems.
Advanced features in recent versions of HFS+ can easily be disrupted by a defragmentation tool
that does not support them, resulting in decreased performance.
There is a risk associated with defragmentation.
Whilst these arguments are certainly valid, they are not the whole story. For one thing, iDefrag,
unlike most other disk defragmentation tools, fully supports the most recent features of HFS+,
namely the metadata zone (or “hot band”) and the adaptive hot file clustering support added in
Mac OS X 10.3. Not only does it avoid disrupting them, but it is capable of fixing disruption caused
by other software by moving files into or out of the metadata zone as appropriate.
Sensible arguments for occasional optimization of your disk include:
HFS+ is not very good at keeping free space contiguous, which can, in turn, lead to large files
becoming very fragmented, and can also cause problems for the virtual memory subsystem on
Mac OS X.
Older versions of the Mac OS are not themselves aware of the metadata zone policy, and may
disrupt its performance.
HFS+ uses B-Tree index files to hold information about the filesystem. If a large number of files
are placed on a disk, the filesystem may have to enlarge these B-Tree structures; however, there
is no built-in mechanism to shrink them again once the files are deleted, so the space taken up
by these files has been lost.
Whilst HFS+ is good at keeping individual files defragmented, mechanisms like Software Update
may result in files that are components of the same piece of software being scattered across the
disk, leading to increased start-up times, both for Mac OS X itself and for applications software.
This is a form of fragmentation that is typically overlooked.
Defragmenting disk images can be helpful, particularly if they are to be placed onto a CD/DVD, as
seeks on CD/DVD discs are particularly expensive.
Some specific usage patterns may cause fragmentation despite the features of HFS+ that are
designed to avoid it.
We do not recommend very frequent optimization of your disk; optimizing a disk can take a
substantial amount of time, particularly with larger disks, far outweighing the benefits that are
likely to be obtained by (say) a weekly optimization regime.
Optimization may make more sense, however, following large software updates, or on an
occasional basis if you notice decreased performance and lots of hard disk seeking on system
start-up or when starting an application.
Kj -
The default values written to magnus.conf are suitable for most installations.
If you are interested in tuning your web server for performance, the "Performance Tuning, Sizing, and Scaling Guide" at http://docs.iplanet.com/docs/manuals/enterprise.html
has some suggestions for magnus.conf values. -
Best Practices for FSCM Multiple systems scenario
Hi guys,
We have a scenario to implement FSCM credit, collections and dispute management solution for our landscape comprising the following:
a 4.6c system
a 4.7 system
an ECC 5 system
2 ECC6 systems
I have documented my design, but would like to double-check and compare notes with colleagues regarding the following areas/questions.
Business partner replication and synchronization: what is the best practice for the initial replication of customers in each of the different systems to business partners in the FSCM system? (a) for the initial creation, and (b) for on-going synchronization of new customers and changes to existing customers?
Credit Management: what is the best practice for update of exposures from SD and FI-AR from each of the different systems? Should this be real-time for each transaction from SD and AR (synchronous) or periodic, say once a day? (assuming we can control this in the BADI)
Is there any particular point to note in dispute management?
Any other general note regarding this scenario?
Thanks in advance. Comments appreciated.
Hi,
I guess, when you have information that SAP can read and act on, the interface has to be asynchronous (from non-SAP to FSCM).
But when the credit analysis is done by a non-SAP party such as Experian, SAP sends the information about invoices paid and not paid, and this non-SAP party returns a rating for the customer. All banks and big companies in the world do the same. For this you have the synchronous interface. This interface will update FSCM-CR (Credit), blocking the customer or not, and decreasing or increasing their limit amount to buy.
So, for these 1,000 sales orders, you'll have to think with your PI team about how to create an interface for this volume. What parameters does SAP have to check? Is there a time interval to receive and send back? Will it be synchronous or asynchronous?
Contact your PI team to help think through this information exchange.
I hope this answers your question.
JPA
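The synchronous-versus-periodic trade-off raised in this thread can be sketched abstractly. This toy code is not an SAP or PI interface; every name in it is invented:

```python
from collections import defaultdict

class CreditExposure:
    """Toy central exposure ledger (a stand-in for FSCM Credit Management)."""
    def __init__(self):
        self.exposure = defaultdict(float)
        self.calls = 0  # number of interface calls that hit the central system

    def post(self, customer, amount):
        self.exposure[customer] += amount
        self.calls += 1

def send_realtime(ledger, transactions):
    # Synchronous: one interface call per SD / FI-AR document.
    for customer, amount in transactions:
        ledger.post(customer, amount)

def send_periodic(ledger, transactions):
    # Periodic: aggregate locally first, then one call per customer per run.
    totals = defaultdict(float)
    for customer, amount in transactions:
        totals[customer] += amount
    for customer, amount in totals.items():
        ledger.post(customer, amount)

txns = [("C1", 100.0), ("C2", 250.0), ("C1", 50.0), ("C1", 25.0)]

realtime = CreditExposure()
send_realtime(realtime, txns)
periodic = CreditExposure()
send_periodic(periodic, txns)

print(realtime.calls, periodic.calls)          # 4 vs 2 interface calls
print(realtime.exposure == periodic.exposure)  # same final exposure: True
```

Both strategies converge on the same exposure; the periodic batch simply trades interface-call volume for update latency.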