Best Practices for AD Import into FIM MV

I'm working on a project which will require AD users to be imported into FIM and then shown in the FIM portal. I know I could import users from a specific OU, but what I'm not sure about is how FIM and the MV keep track of users.
For example:
I configure "john.doe" in OU "London" and then import the account into the MV.
john.doe is moved into the Hong Kong OU and I run a delta sync - can FIM track this change?
Would I be better off using an anchor on objectSid to deal with user account moves and users changing names/logons?
Thanks

When you configure the AD MA you select the OUs in which objects are managed through that MA. As long as you keep the user within the selected OUs, FIM will "see" the user and will import changes to the object.
FIM uses objectGUID as the anchor internally (I assume this - never looked into the agent code :) ) but presents the DN as the anchor to you. You can't change this. When a user is moved between OUs you will see it as a rename operation, i.e. a DN change.
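For example, a quick LDAP check shows what stays stable when john.doe is moved from the London OU to Hong Kong: the objectGUID never changes, while the DN does. This is only a minimal sketch in Python with the ldap3 package; the domain controller, credentials and domain names are made up.

from ldap3 import Server, Connection, ALL

# Hypothetical domain controller and read-only service account.
server = Server("dc01.example.com", get_info=ALL)
conn = Connection(server, user="svc_read@example.com", password="***", auto_bind=True)

# Search the whole domain so the account is found regardless of which OU currently holds it.
conn.search(
    "dc=example,dc=com",
    "(sAMAccountName=john.doe)",
    attributes=["objectGUID", "distinguishedName"],
)
entry = conn.entries[0]
print(entry.objectGUID.value)          # identical before and after the OU move
print(entry.distinguishedName.value)   # changes from ...OU=London... to ...OU=Hong Kong...

Run the same search after the move and only the DN line changes, which is exactly the rename FIM reports on the delta import.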
Tomek Onyszko, memberOf Predica FIM Team (http://www.predica.pl), IdAM knowledge provider @ http://blog.predica.pl

Similar Messages

  • Tips and best practices for translating C into LabVIEW? SERIOUS newbie...

    I need to translate a C function into LabVIEW.  This will be my *first* LabVIEW project.  I've been reading some tutorials, and I'm still struggling to get my brain out of "C/C++ mode" and learn the LabVIEW paradigms.
    Structurally, the function that I need to translate gets called from a while-loop and performs a bunch of mathematical calculations. 
    The basic layout is something like this (this obviously isn't the actual code, it just illustrates the general flow control and techniques that it uses).
    struct Params {
        // About 20 int and float parameters
    };

    int CalculateMetrics(struct Params *pParams,
                         float input1, float input2 /* [etc] */)
    {
        int errorCode = 0;
        float metric1;
        float metric2;
        float metric3;

        // Do some math like:
        metric1 = input1 * (pParams->someParam - 5);
        metric2 = metric1 + (input2 / pParams->someOtherParam);
        // Tons more simple math
        // A couple of for-loops

        if (metric1 < metric2) {
            // manipulate metric1 somehow
        } else {
            // set some kind of error code
            errorCode = ...;
        }

        if (!errorCode) {
            metric3 = metric1 + pow(metric2, 3);
        }

        // More math...
        // etc...
        // update some external global metrics variables
        return errorCode;
    }
    I'm still too green to understand whether or not a function like this can translate cleanly from C to LabVIEW, or whether the LabVIEW version will have significant structural differences. 
    Are there any general tips or "best practices" for this kind of task?
    Here are some more specific questions:
    Most of the LabVIEW examples that I've seen (at least at the beginner level) seem to rely heavily on using front panel controls to provide inputs to functions. How do I build a VI where the input arguments (input1, input2, etc.) come in as numbers, and aren't tied to dials or buttons on the front panel?
    The structure of the C function seems to rely heavily on the use of stack variables like metric1 and metric2 in order to perform calculations.  It seems like creating temporary "stack" variables in LabVIEW is possible, but frowned upon.  Is it possible to keep this general structure in the LabVIEW VI without making the code a mess?
    Thanks guys!

    There's already a couple of good answers, but to add to #1:
    You're clearly looking for a typical C-function. Any VI that doesn't require front panel opening (user interaction) can be such a function.
    If the front panel is never opened the controls are merely used to send data to the VI, much like (identical to) the declaration of a C-function. The indicators can/will be return values.
    Choosing which controls and indicators are used to send data in and out of a VI is almost too easy: click the icon of the front panel (top right), show the connector, and click which control/indicator goes where. Done. That's your function's declaration.
    Basically one function is one VI, although you might want to split it even further - don't create 3k×3k-pixel diagrams.
    Depending on the amount of calculation done in your if-thens, they might be subVIs of their own.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • Best Practice for Keeping Imported Music Organized? - Help Please

    I download tunes and am looking for the best way to keep iTunes up to date with new downloads (as opposed to CD rips). Am I best to have all files in one overall folder and then just use iTunes to import the files from whatever folder they reside in (after the download) to the copy in iTunes and delete the original? To date I've been keeping them scattered across drives and just using iTunes as the pointer to the original, but I'm now losing track of what has been imported and what hasn't. Perhaps the above is a better way, and if so... is it best to delete iTunes, reinstall with an empty database, import everything, and then delete from the original file locations? Help and thanks in advance!!
    Porsche216

    I import similar to the way you described: import, then delete the original. I keep a folder just for importing. iTunes has a function for consolidating the library. It will copy from the multiple locations to the iTunes library location. You would then have to delete the originals to free up space.

  • Transports to Production - Best Practice for Timing Imports

    I would like to compile a list of possible issues/problems which can occur when transporting requests to the Production system during productive time, i.e. 'business time'.
    This obviously goes against the SAP recommendation that little, if any, user activity should take place during an import, but I would like, if possible, some more detail on the risks.
    Thanks for any help.

    Hello Ashley,
    If you are looking for a list of problems and issues related to TMS and what needs to be done about each particular problem, here is a very useful link:
    http://www.geocities.com/SiliconValley/Grid/4858/sap/Basis/tpprobs_solutions.htm
    Thanks & Regards
    Vivek

  • Best practice for a deployment (EAR containing WAR/EJB) in a productive environment

    Hi there,
    I'm looking for some hints regarding the best practice for deployment in a productive environment (currently we are not using a WLS cluster).
    We are using ANT for building, packaging and (dynamic) deployment (via weblogic.Deployer) on the development environment and this works fine (in the meantime).
    From my point of view, I would prefer this kind of deployment not only for development, but also for the productive system. However, I found some hints in some books, and these authors prefer static deployment for the p-system.
    My questions now:
    Could anybody provide me with some links to whitepapers regarding best practice for deployment into a p-system?
    What is your experience with the new two-phase deployment coming up with WLS 7.0?
    Is it really a good idea to use static deployment (and what is the advantage of this kind of deployment)?
    THX in advance
    -Martin

    Hi Siva,
    What best practice are you looking for? If you can be more specific with your question we could provide an appropriate response.
    From my Basis experience, here are some of the best practices:
    1) Productive landscape should have high availability to business. For this you may setup DR or HA or both.
    2) It should have backup configured for which restore has been already tested
    3) It should have all the monitoring setup viz application, OS and DB
    4) Productive client should not be modifiable
    5) Users in Production landscape should have appropriate authorization based on SOD. There should not be any SOD conflicts
    6) Transport to Production should be highly controlled. Any transport to Production should be moved only with appropriate Change Board approvals.
    7) Relevant Database and OS security parameters should be tested before golive and enabled
    8) Pre-go-live and post-go-live checks should have been performed on the Production system
    9) EWA should be configured at least for the Production system
    10) Production system availability using DR should have been tested
    Hope this helps.
    Regards,
    Deepak Kori

  • In the Beginning it's Flat Files - Best Practice for Getting Flat File Data

    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Thanks,
    Gregory

    Brother wrote:
    I probably should have posed this question here before I delved into writing Java to get data for reports, but better late than never.
    Our ERP is written in COBOL. We have a third party ODBC which allows us to access data using a version of SQL. I have several Java sources compiled in my database that access the data and return something relevant. The Java sources are written in a procedural style rather than taking advantage of object oriented programming with attributes and methods.
    OO is a choice, not a mandate. Using Java in a procedural way is certainly not ideal, but given that it is existing code I would look more into whether it is well-written procedural code rather than looking at the lack of OO.
    Now that I am becoming more comfortable with the Java language, I would greatly appreciate any feedback as to best practices for incorporating Java into my database.
    My guess is that it would be helpful to model the ERP "tables" with Java classes that would have attributes, which correspond to the fields, and methods to return the attributes in an appropriate way. Does that sound reasonable? If so, is there a way to automate the task of modeling the tables? If not reasonable, what would you recommend?
    Normally you create a data model driven by business need. You then implement, using whatever means seem expedient in terms of other business constraints, to closely model that data model.
    It is often the case that there is a strong correlation between data models and tables, but in my experience it is rare that there are not other needs driven by the data model (such as how foreign keys and link tables are implemented and used).

  • Best practice for including additional DLLs/data files with plug-in

    Hi,
    Let's say I'm writing a plug-in which calls code in additional DLLs, and I want to ship these DLLs as part of the plug-in.  I'd like to know what is considered "best practice" in terms of whether this is ok  (assuming of course that the un-installer is set up to remove them correctly), and if so, where is the best place to put the DLLs.
    Is it considered ok at all to ship additional DLLs, or should I try and statically link everything?
    If it's ok to ship additional DLLs, should I install them in the same folder as the plug-in DLL (e.g. the .8BF or whatever), in a subfolder of the plug-in folder or somewhere else?
    (I have the same question about shipping additional files too, such as data or resource files.)
    Thanks
                             -Matthew

  • Best practice for importing non-"Premiere-ready" video files

    Hello!
    I work with internal clients that provide me with a variety of different video types (it could be almost ANYTHING: WMV, MP4, FLV). I of course ask for AVIs when possible, but unfortunately I have no control over the type of file I'm given.
    And, naturally, Premiere (just upgraded to CS5) has a hard time dealing with these files. It's unpredictable, ranging from working fine to not working at all, and everything in between. Naturally, it's become a huge issue for turnaround time.
    Is there a best practice for preparing files for editing in Premiere?
    I've tried almost everything I can think of: converting the file(s) to AVIs using a variety of programs/methods. Most recently, I tried creating a Watch Folder in Adobe Media Encoder and setting it for AVI with the proper aspect ratio. It makes sense to me that that should work: using an Adobe product to render the file into something Premiere can work with.
    However, when I imported the resulting AVI into Premiere, it gave me the Red Line of Un-renderness (that is the technical term, right?), and had the same sync issue I experienced when I brought it in as a WMV.
    Given our environment, I'm completely fine with adding render time to the front-end of projects, but it has to work.  I want files that Premiere likes.
    THANK YOU in advance for any advice you can give!
    -- Dave

    I use an older conversion program (my PrPro has a much older internal AME, unlike yours), DigitalMedia Converter 2.7. It is shareware, and has been replaced by Deskshare with newer versions, but my old one works fine. I have not tried the newer versions yet. One thing that I like about this converter is that it ONLY uses system CODECs, and does not install its own, like a few others. This DOES mean that if I get footage with an oddball CODEC, I need to go get it and install it on the system.
    I can batch process AV files of most types/CODECs, and convert to DV-AVI Type II w/ 48 KHz 16-bit PCM/WAV audio at 29.97 FPS (I am in NTSC land). So far, 99% of the resultant converted files have been perfect, whether from DivX, WMV, MPEG-2, or almost any other format/CODEC. If there is any OOS (out of sync), my experience has been that it will be static, so I just have to adjust the sync offset by a few frames, and that takes care of things.
    In a few instances, the PAR flag has been missed (Standard 4:3 vs Widescreen 16:9), but Interpret Footage has solved those few issues.
    Only oddity that I have observed (mostly with DivX, or WMV's) is that occasionally, PrPro cannot get the file's Duration correct. I found that if I Import those problem files into PrElements, and then just do an Export, to the same exact specs., that resulting file (seems to be 100% identical, but something has to be different - maybe in the header info?) Imports perfectly into PrPro. This happens rarely, and I have the workaround, though it is one more step for those. I have yet to figure out why one very similar file will convert with the Duration info perfect, and then a companion file will not. Nor have I figured out exactly what is different, after running through PrE. Every theory that I have developed has been shot down by my experiences. A mystery still.
    AME works well for most, as a converter, though there are just CODECs that Adobe programs do not like, such as DivX and Xvid. I doubt that any Adobe program will handle those suckers easily, if at all.
    Good luck,
    Hunt

  • What is the best practice for inserting (unique) rows into a table with a key-column constraint when the source may contain duplicate (already existing) rows?

    My final data table has a unique key constraint on two key columns. I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table, but they are not constrained (not unique) in the daily capture table). I don't want to insert rows from daily capture which already exist in the final data table (based on the two key columns).
    Currently, what I do is to select * into a #temp table from the join of the daily capture and final data tables on these two key columns. Then I delete the rows in the daily capture table which match the #temp table. Then I insert the remaining rows from daily capture into the final data table.
    Would it be possible to simplify this process by using an Instead Of trigger on the final table and just insert directly from the daily capture table? How would this look?
    What is the best practice for inserting unique (new) rows and ignoring duplicate rows (rows that already exist in both the daily capture and final data tables) in my particular operation?
    Rich P

    Please follow basic netiquette and post the DDL we need to answer this. Follow industry and ANSI/ISO standards in your data. You should follow ISO-11179 rules for naming data elements. You should follow ISO-8601 rules for displaying temporal data. We need to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. And you need to read and download the PDF for:
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> My final data table contains a two key columns unique key constraint. [unh? one two-column key or two one-column keys? Sure wish you posted DDL] I insert data into this table from a daily capture table (which also contains the two columns that make up the key in the final data table but are not constrained (not unique) in the daily capture table). <<
    Then the "capture table" is not a table at all! Remember the first day of your RDBMS class? A table has to have a key. You need to fix this error. What ETL tool do you use?
    >> I don't want to insert rows from daily capture which already exists in final data table (based on the two key columns). <<
    MERGE statement; Google it. And do not use temp tables.
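    For illustration, here is a minimal sketch of that MERGE approach, run from Python via pyodbc. Since no DDL was posted, the table and column names (FinalData, DailyCapture, KeyCol1, KeyCol2, SomeValue) and the connection string are made up; the ROW_NUMBER step just guards against duplicate key pairs inside the capture table itself:

    import pyodbc

    # Placeholder connection details - adjust driver, server, database and auth as needed.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
    )

    merge_sql = """
    MERGE dbo.FinalData AS tgt
    USING (
        SELECT KeyCol1, KeyCol2, SomeValue
        FROM (
            SELECT KeyCol1, KeyCol2, SomeValue,
                   ROW_NUMBER() OVER (PARTITION BY KeyCol1, KeyCol2
                                      ORDER BY KeyCol1) AS rn
            FROM dbo.DailyCapture
        ) AS d
        WHERE rn = 1   -- keep one row per key pair from the capture table
    ) AS src
        ON  tgt.KeyCol1 = src.KeyCol1
        AND tgt.KeyCol2 = src.KeyCol2
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (KeyCol1, KeyCol2, SomeValue)
        VALUES (src.KeyCol1, src.KeyCol2, src.SomeValue);
    """

    cursor = conn.cursor()
    cursor.execute(merge_sql)   # only rows whose key pair is missing from FinalData get inserted
    conn.commit()

    Rows whose key pair already exists in FinalData are simply left alone, which replaces the #temp-table-and-delete shuffle with a single statement.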
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice / Data, Measurements and Standards in SQL / SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking in Sets / Trees and Hierarchies in SQL

  • Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

    What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers, but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both versions, or is it better to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
    I'm currently leaning towards creating two separate deployment shares, just so that I don't have to keep typing (x86) and (x64) for every application I import, as well as making it easier when choosing applications in the Lite Touch installation. But I know each deployment share has the option to create both an x86 and x64 boot image, so that's why I am confused.

    Supporting two task sequences is way easier than supporting two shares. Two shares means two boot media, or maintaining a method of directing the user to one or the other. Everything needs to be imported or configured twice, not to mention doubling the storage space. MDT is designed to have multiple task sequences, so why wouldn't you use them?
    Supporting multiple task sequences can be a pain, but it's not bad once you get a system. Supporting app installs intelligently is a large part of that. We have one folder per app install, with a wrapper VBScript that handles OS detection. If there are separate binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So, import once, assign once, and forget it. It's the same install package we use for Altiris, and we'll be using a PowerShell version of it when we fully migrate to SCCM.
    Others handle x86 and x64 apps separately, and use the MDT app details to select which platform the app is meant for. I've done that, but since we have a template for the VBScript wrapper and it's a standard process, I believe it's easier. YMMV.
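    To make the wrapper idea concrete, here is the same detect-the-architecture-and-pick-a-subfolder pattern sketched in Python rather than VBScript; the folder layout, installer name and the /quiet switch are hypothetical placeholders:

    import os
    import platform
    import subprocess
    import sys

    def main() -> int:
        # The wrapper and the x86/x64 subfolders live side by side in one app folder.
        here = os.path.dirname(os.path.abspath(__file__))
        # Architecture as seen by this process; 'AMD64' on 64-bit Windows, 'x86' on 32-bit.
        arch = "x64" if platform.machine().endswith("64") else "x86"
        installer = os.path.join(here, arch, "setup.exe")

        if not os.path.exists(installer):
            print(f"No installer found for {arch}", file=sys.stderr)
            return 1

        # Silent-install switches vary per product; /quiet is just a common example.
        return subprocess.call([installer, "/quiet"])

    if __name__ == "__main__":
        sys.exit(main())

    Import the folder once as a single MDT application whose command line runs the wrapper, and the same package works in both the x86 and x64 task sequences.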
    Once you get your apps into MDT, create bundles: core build bundle, core deploy bundle, laptop deploy bundle, etcetera. Now you don't have to assign twenty apps to both task sequences, just one bundle. When you replace one app in the bundle, all task sequences are updated automatically. It's kind of the same mentality as Active Directory: users, groups and resources = apps, bundles and task sequences.
    If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share. Use a selection profile to upload only your deploy side to production. In fact I separate everything (except drivers) into build and deploy folders on my lab server. Don't mix build and deploy, and don't mix lab/QA and production. I also keep a "Retired" folder. When I replace an app, task sequence, OS, etcetera, I move it to the retired folder and append "RETIRED - " to the front of it so I can instantly spot it if it happens to show up somewhere it shouldn't.
    To me, the biggest "weakness" of MDT is its flexibility. There are literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things get complicated. Tossing everything into one giant bucket will have you pulling your hair out.

  • Best practice for the test environment & DBA plan activities documents

    Dear all,
    In our company, we did the sizing for the hardware.
    We have three environments (Test/Development, Training, Production).
    But the test environment servers are smaller than the Production environment servers.
    My question is:
    What is the best practice for the test environment?
    (Are there any recommendations from Oracle related to this, or any PDF files that could help me...?)
    Also, please, can I have a detailed document regarding the DBA plan activities?
    I appreciate your help and advice.
    Thanks
    Edited by: user4520487 on Mar 3, 2009 11:08 PM

    Follow your build document for the same steps you used to build production.
    You should know where all your code is. You can use the deployment manager to export your configurations. Export customized files from MDS. Just follow the process again, and you will have a clean instance not containing production data.
    It only takes a lot of time if your client is lacking documentation or if you're not familiar with all the parts of the environment. What's 2-3 hours compared to all the issues you will run into if you copy databases or import/export schemas?
    -Kevin

  • Kernel: PANIC! -- best practice for backup and recovery when modifying system?

    I installed NVidia drivers on my OL6.6 system at home and something went bad with one of the libraries. On reboot, the kernel would panic and I couldn't get back into the system to fix anything. I ended up re-installing the OS to recover my system.
    What would be some best practices for backing up the system when making a change and then recovering if this happens again?
    Would LVM snapshots be a good option? Can I recover a snapshot from a rescue boot?
    EX: File system snapshots with LVM | Ars Technica -- scroll down to the section discussing LVM.
    Any pointers to documentation would be welcome as well.  I'm just not sure what to do to revert the kernel or the system when installing something goes bad like this.
    Thanks for your attention.

    There is often a common misconception: A snapshot is not a backup. A snapshot and the original it was taken from initially share the same data blocks. LVM snapshot is a general purpose solution which can be used, for example, to quickly create a snapshot prior to a system upgrade, then if you are satisfied with the result, you would delete the snapshot.
    The advantage of a snapshot is that it can be used on a live filesystem or volume while changes are written to the snapshot volume. Hence it's called "copy on write" (COW), or copy on change if you want. This is necessary for system integrity: you get a consistent status of all data at a certain point in time while still allowing changes to happen, for example in order to perform a filesystem backup. A snapshot is no substitute for disaster recovery in case you lose your storage media. A snapshot only takes seconds, and initially does not copy or back up any data unless data changes. It is therefore important to delete the snapshot once it is no longer required, in order to prevent duplication of data and restore filesystem performance.
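    As a rough sketch of that snapshot-before-you-change-something workflow, using the standard LVM command-line tools driven from Python; the volume group, LV name and snapshot size here are hypothetical, and this has to run as root:

    import subprocess

    VG_LV = "ol/root"              # volume group / logical volume holding the root filesystem
    SNAP = "root_pre_upgrade"      # name for the snapshot LV

    def take_snapshot():
        # Reserve 5G of copy-on-write space; only blocks changed after this point land there.
        subprocess.run(
            ["lvcreate", "--size", "5G", "--snapshot", "--name", SNAP, f"/dev/{VG_LV}"],
            check=True,
        )

    def roll_back():
        # Merge the snapshot back into the origin volume. For an in-use root LV the merge
        # is deferred and completes on the next activation, e.g. after booting rescue media.
        subprocess.run(["lvconvert", "--merge", f"/dev/{VG_LV.split('/')[0]}/{SNAP}"], check=True)

    def discard_snapshot():
        # If the change went well, drop the snapshot so the COW space is freed again.
        subprocess.run(["lvremove", "-y", f"/dev/{VG_LV.split('/')[0]}/{SNAP}"], check=True)

    if __name__ == "__main__":
        take_snapshot()

    If the driver install goes wrong you would call roll_back() (or run the same lvconvert command by hand from the rescue environment); if it goes well, call discard_snapshot() instead.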
    LVM was never a great thing under Linux and can cause serious I/O performance bottlenecks. If snapshot or COW technology suits your purpose, I suggest you look into Btrfs, which is a modern filesystem built into the latest Oracle UEK kernel. Btrfs employs the idea of subvolumes and is much more efficient than LVM because it can operate on files or directories, while LVM works on the whole logical volume.
    Keep in mind however, you cannot use LVM or Btrfs with the boot partition, because the Grub boot loader, which loads the Linux kernel, cannot deal with LVM or BTRFS before loading the Linux kernel (catch22).
    I think the following is an interesting and fun to read introduction explaining basic concepts:
    http://events.linuxfoundation.org/sites/events/files/slides/Btrfs_1.pdf

  • Best practice for putting together scenes in a Flash project?

    Hi, I'm currently working on a flash project with the following characteristics:
    using a PC
    2048x1080 pixels
    30 fps
    One audio file that plays (once) continuously across the whole project
    there are actions that relate to the audio, so the timing is important
    at least 10 scenes
    about 7 minutes long total
    current intent is for it to be played in a modern theater as a surprise
    What is the best practice for working on this project and then compiling it together?
    Do it all in one project file?
    Split the work into different project (xfl) files for each scene and then put it together when all the scenes are finalized?
    Use one project file but create different "scenes" for each respective scene?  I think this is the "classic" way (?).
    Make the scenes "movie clips" and then insert them into the timeline with the audio as its own layer?
    Other?
    I'm currently working on it by having it all in one project file.  But I've noticed that there's some lag (or it gets choppy) at certain parts during playback and the SWF history shows 3.1 MB with a yellow triangle with exclamation point symbol.  Thanks in advance. 

    You would only do that if it makes your job easier. Generally speaking, it would not.
    When trying to sync sound and animation, I think most authors find it easiest to use graphic symbols because you can see their animation when scrubbing the main timeline. With movieclips you only see their animation when testing.
    However, if you're going to use ActionScript to control some of your symbols, those symbols should be movieclips.

  • What is the best practice for AppleScript deployment on several machines?

    Hi,
    I am developing some AppleScripts for my colleagues at work and I don't want to visit each of them to deploy my AppleScript on their Macs.
    So, what is the best practice for AppleScript deployment on several machines?
    Is there an installer created by the Automator available?
    I would like to have something like an App to run which puts all my AppleScript relevant files into the right place onto a destination Mac.
    Thanks in advance.
    Regards,

    There's really no 'right place' to put AppleScripts. Folder action scripts need to go in ~/Library/Scripts/Folder Action Scripts (or /Library/Scripts/Folder Action Scripts), anything you want to appear in the script menu needs to go in ~/Library/Scripts (or /Library/Scripts), and script applications should probably go in the Applications folder, but otherwise scripts can be placed anywhere. Conventional places to put them are in ~/Library/Scripts or in a subfolder of ~/Library/Application Support if they are run by an application. The more important issue is to make sure you generalize the scripts: use the 'path to' command to get local paths rather than hard-coding them in, test that any applications or Unix executables you call are present on the machine, and use script bundles rather than plain scripts if your scripts have private resources.
    You can write a quick installer script if you want to make sure scripts go where you want them. A skeleton version looks like this:
    -- where the scripts should end up: ~/Library/Scripts for the current user
    set scriptsFolder to path to scripts folder from user domain
    -- the script being installed, bundled inside the installer app's Resources/yyy folder
    set scriptsToExport to path to resource "xxx.scpt" in directory "yyy"
    tell application "Finder"
      duplicate scriptsToExport to scriptsFolder with replacing
    end tell
    say "Scripts are installed"
    Save this as a script application, then open the application package, create a folder called "yyy" in the Resources folder, and copy your script "xxx.scpt" into it. Other people can then run the app to install the script.

  • BI Best Practice for Chemical Industry

    Hello,
    I would like to know if anyone is aware of an SAP BI Best Practice for Chemicals. If so, can anyone please post a link as well?
    Thanks

    Hi Naser,
    The information below gives you a detailed explanation regarding the chemical industry.
    SAP Best Practices packages support best business practices that quickly turn your SAP ERP application into a valuable tool used by the entire business. You can evaluate and implement specific business processes quickly – without extensive customization of your SAP software. As a result, you realize the benefits with less effort and at a lower cost than ever before. This helps you improve operational efficiency while providing the flexibility you need to be successful in highly demanding markets. SAP Best Practices packages can benefit companies of all sizes, including global enterprises creating a corporate template for their subsidiaries.
    Extending beyond the boundaries of conventional corporate divisions and functions, the SAP Best Practices for Chemicals package is based on SAP ERP; the SAP Environment, Health & Safety (SAP EH&S) application; and the SAP Recipe Management application. The business processes supported by SAP Best Practices for Chemicals encompass a wide range of activities typically found in a chemical industry practice:
    • Sales and marketing
    – Sales order processing
    – Presales and contracts
    – Sales and distribution (including returns, returnables, and rebates, with quality management)
    – Inter- and intracompany processes
    – Cross-company sales
    – Third-party processing
    – Samples processing
    – Foreign trade
    – Active-ingredient processing
    – Totes handling
    – Tank-trailer processing
    – Vendor-managed inventory
    – Consignment processing
    – Outbound logistics
    • Supply chain planning and execution
    – Supply and demand planning
    • Manufacturing planning and execution
    – Manufacturing execution (including quality management)
    – Subcontracting
    – Blending
    – Repackaging
    – Relabeling
    – Samples processing
    • Quality management and compliance
    – EH&S dangerous goods management
    – EH&S product safety
    – EH&S business compliance services
    – EH&S industrial hygiene and safety
    – EH&S waste management
    • Research and development
    – Transformation of general recipes
    • Supplier collaboration
    – Procurement of materials and services (including quality management)
    – Storage tank management
    – E-commerce (Chemical Industry Data Exchange)
    • Enterprise management and support
    – Plant maintenance
    – Investment management
    – Integration of the SAP NetWeaver Portal component
    • Profitability analysis
    More Details
    This section details the most common business scenarios – those that benefit most from the application of best practices.
    Sales and Marketing
    SAP Best Practices for Chemicals supports the following sales- and marketing-related business processes:
    Sales order processing – In this scenario, SAP Best Practices for Chemicals supports order entry, delivery, and billing. Chemical industry functions include the following:
    • Triggering an available-to-promise (ATP) inventory check on bulk orders after sales order entry and automatically creating a filling order (note: an ATP check is triggered for packaged material)
    • Selecting batches according to customer requirements
    • Processing internal sales activities that involve different organizational units
    Third-party and additional internal processing – In this area, the SAP Best Practices for Chemicals package provides an additional batch production step that can be applied to products previously produced by either continuous or batch processing. The following example is based on further internal processing of plastic granules:
    • Purchase order creation, staging, execution, and completion
    • In-process and post-process control
    • Batch assignment from bulk to finished materials
    • Repackaging of bulk material
    SAP Best Practices for Chemicals features several tools that help you take advantage of chemical industry best practices. For example, it provides a fully documented and reusable prototype that you can turn into a productive solution quickly. It also provides a variety of tools, descriptions of business scenarios, and proven configuration of SAP software based on more than 35 years of working with the chemical industry.
    SAP Functions in Detail – SAP Best Practices for Chemicals
    The package can also be used to support external toll processing such as that required for additional treatment or repackaging.
    Tank-trailer processing – In this scenario, SAP Best Practices for Chemicals helps handle the selling of bulk material, liquid or granular. It covers the process that automatically adjusts the differences between the original order quantities and the actual quantities filled in the truck. To determine the quantity actually filled, the tank trailer is weighed before and after loading. The delta weight – or quantity filled – is transmitted to the SAP software via an order confirmation. When the delivery for the sales order is created, the software automatically adjusts the order quantity with the confirmed filling quantity. The customer is invoiced for the precise quantity filled and delivered.
    Supply Chain Planning and Execution
    SAP Best Practices for Chemicals supports supply chain planning as well as supply chain execution processes:
    Supply and demand planning – Via the SAP Best Practices for Chemicals package, SAP enables complete support for commercial and supply-chain processes in the chemical industry, including support for integrated sales and operations planning, planning strategies for bulk material, and a variety of filling processes with corresponding packaging units. The package maps the entire supply chain – from sales planning to material requirements planning to transportation procurement.
    Supplier Collaboration
    In the procurement arena, best practices are most important in the following scenario:
    Procurement of materials and services:
    In this scenario, SAP Best Practices for Chemicals describes a range of purchasing processes, including the following:
    • Selection of delivery schedules by vendor
    • Interplant stock transfer orders
    • Quality inspections for raw materials, including sampling requests triggered by goods receipt
    Manufacturing Scenarios
    SAP Best Practices for Chemicals supports the following manufacturing-related business processes:
    Continuous production – In a continuous production scenario, SAP Best Practices for Chemicals typifies the practice used by basic or commodity chemical producers. For example, in the continuous production of plastic granules, production order processing is based on run-schedule headers. This best-practice package also describes batch and quality management in continuous production. Other processes it supports include handling of byproducts, co-products, and the blending process.
    Batch production – For batch production, SAP Best Practices for Chemicals typifies the best practice used by specialty chemical producers. The following example demonstrates batch production of paint, which includes the following business processes:
    • Process order creation, execution, and completion
    • In-process and post-process control
    • Paperless manufacturing using XML-based process integration sheets
    • Alerts and events
    • Batch derivation from bulk to finished materials
    Enterprise Management and Support
    SAP Best Practices for Chemicals also supports a range of scenarios in this area:
    Plant maintenance – SAP Best Practices for Chemicals allows for management of your technical systems. Once the assets are set up in the system, it focuses on preventive and emergency maintenance. Tools and information support the setup of a production plant with assets and buildings.
    Revenue and cost controlling – The package supports the functions that help you meet product-costing requirements in the industry. It describes how cost centers can be defined, attached to activity types, and then linked to logistics. It also supports costing and settlement of production orders for batch and continuous production. And it includes information and tools that help you analyze sales and actual costs in a margin contribution report.
    The SAP Best Practices for Chemicals package supports numerous integrated business processes typical of the chemical industry, including the following:
    • Quality management – Supports integration of quality management concepts across the entire supply chain (procurement, production, and sales), including batch recall and complaint handling
    • Batch management – Helps generate batches based on deliveries from vendors or as a result of company production or filling, with information and tools for total management of batch production and associated processes, including batch derivation, the batch information cockpit, and a batch where-used list
    • Warehouse management – Enables you to identify locations where materials or batch lots are stored, recording details such as bin location and other storage information on dangerous goods, to help capture all information needed to show compliance with legal requirements
    Regards
    Sudheer
