Best practice regarding automatic IU documents

Hi Team,
In the case of automatic postings for IU elimination, how do we configure the document type in order to allow different subassignments, such as segments?
For automatic posting documents, the subassignment field is only allowed at header level. At the time of posting, the system allows documents to be posted with different subassignments, but when you go and display an individual document, the system does not display it and throws an error message.
Can anyone advise?
Thanks and regards
Naveen.KV

The subassignments are not changeable for automatic IU documents as the values are the same as those of the source records.
One option is to post manual adjusting entries to change the IU elimination subassignments.
Another option is to create a reclassification method/task to automatically change the subassignments for the IU eliminations.

Similar Messages

  • Best Practices regarding program RCOCB004

    Dear Colleagues
    I'd like to discuss the Best Practices regarding the setup of jobs to send Process Messages
    In my company we have a batch job with two steps. Each step contains one variant of program RCOCB004.
    The first step will send messages with Status "To be sent", "To be resubmitted" and "To be resubm. w/warng"
    The second step will send messages with Status "Destination Error", "Terminated", "Incomplete"
    However, this job sometimes fails with error "Preceding job not yet completed (plant US07)"
    I'd like to discuss the best way to set up this job in order to avoid this error and also to improve performance.
    Thanks and Regards

    Dear,
    To keep the number of message logs in the system low, proceed as follows:
          1. Check the report variants for report RCOCB004 used in your send jobs. The sending of messages in status "Destination error" or "Terminated" is only useful if the error is corrected without manual intervention; for example, with messages of category PI_PHST, if the sequence of the messages or time events was swapped in the first send process.
          2. Regularly delete the logs of the messages that were not sent to the destination PI01 using report RCOCB009 (Transaction CO62).
          3. Check whether it is actually required to send messages to the destination PI01. This is only useful if you want to evaluate the data of these messages by means of the process data evaluation, or if the message data including the logs are to be part of the process data documentation or the batch log. Remove destination PI01 for the message categories to which the above-mentioned criteria do not apply. You can activate destination PI01 again at a later stage.
          4. If you still want to send process messages to destination PI01, carry out regular archiving of your process orders. As a result of the archiving, the message copies and logs in the process message record are also deleted.
          5. If the described measures do not suffice, you can delete the logs using Transaction SLG2.
    Control recipe send = RCOCB006 and you need to set the job to run after event SAP_NEW_CONTROL_RECIPES
    Process message send = RCOCB002 (cross plant) and RCOCB004 (specific plant). You need to create variants for these.
    Check the IMG documentation in PPPI for control recipes and process instructions where there is more information about this. Also standard SAP help is quite good on these points.
    Finally, if you are automatically generating process instructions then you need program RCOCRPVG plus appropriate variants.
    Hope it will help you.
        Regards,
    R.Brahmankar

  • Best Practices regarding AIA and CDP extensions

    Based on the guide "AD CS Step by Step Guide: Two Tier PKI Hierarchy Deployment", I'll have both
    internal and external users (with a CDP in the DMZ) so I have a few questions regarding the configuration of AIA/CDP.
    From here: http://technet.microsoft.com/en-us/library/cc780454(v=ws.10).aspx
    A root CA certificate should have an empty CRL distribution point because the CRL distribution point is defined by the certificate issuer. Since the root's certificate issuer is the root CA, there is no value in including a CRL distribution point for the root CA. In addition, some applications may detect an invalid certificate chain if the root certificate has a CRL distribution point extension set.
    To have an empty CDP do I have to add these lines to the CAPolicy.inf of the Offline Root CA:
    [CRLDistributionPoint]
    Empty = true
    What about the AIA? Should it be empty for the root CA?
    Using only HTTP CDPs seems to be the best practice, but what about the AIA? Should I only use HTTP?
    Since I'll be using only HTTP CDPs, should I use LDAP Publishing? What is the benefit of using it and what is the best practice regarding this?
    If I don't want to use LDAP Publishing, should I omit the commands: certutil -f -dspublish "A:\CA01_Fabrikam Root CA.crt" RootCA / certutil -f -dspublish "A:\Fabrikam Root CA.crl" CA01
    Thank you,

    Is there any reason why you specified a '2' for the HTTP CDP ("2:http://pki.fabrikam.com/CertEnroll/%1_%3%4.crt")? This will be my only CDP/AIA extension, so isn't it supposed to be '1' in priority?
    I tested the setup of the offline Root CA, but after the installation the AIA/CDP extensions were already pre-populated with the default URLs. I removed all of them.
    The Root Certificate and CRL were already created after ADCS installation in C:\Windows\System32\CertSrv\CertEnroll\ with the default naming convention including the server name (%1_%3%4.crt).
    I guess I could rename it without impact? If someday I have to revoke the Root CA certificate or the certificate has expired, how will I update the Root CRL since I have no CDP?
    Based on this guide: http://social.technet.microsoft.com/wiki/contents/articles/15037.ad-cs-step-by-step-guide-two-tier-pki-hierarchy-deployment.aspx,
    the Root certificate and CRL are published in Active Directory:
    certutil -f -dspublish "A:\CA01_Fabrikam Root CA.crt" RootCA
    certutil -f -dspublish "A:\Fabrikam Root CA.crl" CA01
    Is it really necessary to publish the Root CRL in my case?
    Instead of using dspublish, isn't it better to deploy the certificates (Root/Intermediate) through GPO, like in the Default Domain Policy?
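
    For what it's worth, a minimal CAPolicy.inf sketch for an offline root CA with empty CDP and AIA extensions could look like the following (the renewal and CRL validity values are placeholders, not recommendations):
        [Version]
        Signature="$Windows NT$"

        [CRLDistributionPoint]
        Empty=True

        [AuthorityInformationAccess]
        Empty=True

        [Certsrv_Server]
        RenewalKeyLength=4096
        RenewalValidityPeriod=Years
        RenewalValidityPeriodUnits=20
        CRLPeriod=Years
        CRLPeriodUnits=1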

  • Best practice to split up documents into articles?

    Dear Adobe,
    At today's InDesign and DPS session I asked how the DPS folks split up their InDesign documents to upload them as different articles to DPS. Bob wanted to ask Collin, but forgot to do it!
    I will explain my situation (supposing that most producers are sitting in the same boat):
    I have a monthly magazine with about 30 different articles. I receive the print file to create an iPad 1/2, iPad 3 and beginning in next month also an iPhone and Android (also IceCream Sandwich) version. The magazine is just vertical orientation.
    Now with CS6, you do a lot of promotional work for the alternate layout feature.
    But what is the recommendation or best practice at Adobe for uploading this one InDesign document with Viewer Builder to get iPad 1/2, iPad 3 and iPhone renditions containing the separation of the different articles? Please let me know a workaround for how you do this!
    Kind regards
    Yves

    As you know, you need one InDesign file for each article (that file can cover all devices down to the iPhone version). The next article needs a new InDesign file. You can drag-and-drop pages from one InDesign file into the new layout to move pages across documents.
    If you want to synchronize settings, try the book feature. But I have never tested the book feature for CS6 / alternate layout compatibility.
    —Johannes

  • Looking for some best practice regarding Content Administrator access

    Hi. I am looking for some best practice or rule of thumb, from SAP or from other companies, on how they handle Portal Content Administrator access in a Production environment. Basically, our company is implementing the portal to work with SAP BW. We are on SP 9. I am trying to determine whether we should have 1-2 Portal Content Administrators in Production with 24/7 access, or whether we should restrict that. Can you share some ideas of what is right and what is not?
    Should we have this access in Production? Or should we have the access but limited? By the way, our users are allowed to publish BI reports/queries into Production.

    Hello Michael,
    Refer to this guide about managing initial content in portal.
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/00bfbf7c-7aa1-2910-6b9e-94f4b1d320e1
    Regards
    Deb

  • Best Practice regarding using and implementing the pref.txt file

    Hi All,
    I would like to start a post regarding what is best practice in using and implementing the pref.txt file. We have reached a stage where we are about to go live with Discoverer Viewer, and I am interested to know what others have encountered or done with their pref.txt file and the Viewer look and feel.
    If any of you have been able to add additional lines to the file, please share ;-)
    Look forward to your replies.
    Lance

    Hi Lance
    Wow, what a question and the simple answer is - it depends. It depends on whether you want to do the query predictor, whether you want to increase the timeouts for users and lists of values, whether you want to have the Plus available items and Selected items panes displayed by default, and so on.
    Typically, most organizations go with the defaults with the exception that you might want to consider turning off the query predictor. That predictor is usually a pain in the neck and most companies turn it off, thus increasing query performance.
    Do you have a copy of my Discoverer 10g Handbook? If so, take a look at pages 785 to 799 where I discuss in detail all of the preferences and their impact.
    I hope this helps
    Best wishes
    Michael Armstrong-Smith
    URL: http://learndiscoverer.com
    Blog: http://learndiscoverer.blogspot.com
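
    For illustration only: with the standard 10g pref.txt key names, turning off the query predictor and lengthening the idle timeout would be a couple of entries like the ones below (the timeout value is just an example, not a recommendation):
        QPPEnable = 0
        Timeout = 1800
    Keep in mind that changes to pref.txt only take effect after the preferences are applied and the Discoverer services are restarted (on 10g this is done with the applypreferences script, if I remember correctly).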

  • Best Practice for Storing Sharepoint Documents

    Hi,
    Is there a best practice for where to store the documents of a SharePoint? I have heard some people say it is best to store SharePoint documents directly in the file system. Others said that it is better to store SharePoint documents in SQL Server.

    What you are referring to is the difference between SharePoint's native storage of documents in SQL, and the option/ability to use SQL's filestream functionality for Remote BLOB Storage (also known as RBS). Typically you are much better off sticking with
    SQL storage for BLOBs, except in a very few scenarios.
    This page will help you decide if RBS is right for your scenario:
    https://technet.microsoft.com/en-us/library/ff628583.aspx?f=255&MSPPError=-2147217396
    -Corey

  • Best practice: interface for editing documents

    Hello
    I use IDeveloper 11g 11.1.1.3.0, ADF Faces
    I have got a task to create a web interface for editing documents.
    Every document has a header and a specification.
    The header has a lot of fields, and every row in the specification also has a lot of fields.
    There are a few PL/SQL procedures I need to call to save a document in the database, and I need to call them in a single transaction.
    So, I need to fill in the whole document and only after that save it to the database.
    To fill in some of the fields I need to use a component like a List of Values (with autoSuggestBehavior and selection of the value from the list).
    The question is: what is the best practice for developing an interface like this?
    I had some trouble when I tried to use ADF BC.
    Maybe there are tutorials?
    I will be very thankful for any advice or links.
    Anatolii

  • Best Practice regarding a Re-classification

    Hi
    We are on 4.7. An amount had been transferred from Trade Debtors to Trade Debtors Retention (Special GL) using FB01 (Cr. Customer A & Dr. Customer A with Sp. GL). This caused us an issue during month-end by creating a DUMMY profit center entry. This, I understand, is due to the fact that when transferring between subledgers without touching a Balance Sheet account, the document does not carry a profit center assignment, therefore resulting in the DUMMY entry.
    I would like to know the best practice for such a reclassification so that I can ask the users to avoid such postings in future.
    Thanks
    Nadini

  • Best Practice Regarding Large Mobility Groups

    I was reading the WLC Best Practices and was wondering if anyone could put a number to this statement regarding the largest number of APs, end users, and controllers which can be contained in a Mobility Group.
    We would be deploying WiSMs in two geographically dispersed data centers. No voice is being used or is planned.
    "Do not create unnecessarily large mobility groups. A mobility group should only have all controllers that have access points in the area where a client can physically roam, for example all controllers with access points in a building. If you have a scenario where several buildings are separated, they should be broken into several mobility groups. This saves memory and CPU, as controllers do not need to keep large lists of valid clients, rogues and access points inside the group, which would not interact anyway.
    Keep in mind that WLC redundancy is achieved through the mobility groups. So it might be necessary in some situations to increase the mobility group size, including additional controllers for
    redundancy (N+1 topology for example)."
    I would be interested in hearing about scenarios where a Catalyst 6509 with 5 WiSM blades is deployed in data centers which back each other up for cases of disaster recovery.
    Can I have one large Mobility group? This would be easier to manage.
    or
    Would it be better to back up each blade with a blade in the second data center? This would call for smaller Mobility Groups.
    I'd be glad to elaborate further if anyone has a similar experience and needs more information.
    All responses will be rated.
    Thanks in advance.
    Paul

    Well, that is a large group indeed, and I would say most organizations use nested groups instead of adding these behemoths to the directory as they are quite difficult to work with.  If it's a one-time thing, you could create it manually in bite-sized
    chunks with LDIF or the like, so that FIM only has to do small delta changes afterwards.
    The 5,000 member limit mostly applies to groups prior to the change to linked value storage.  What is your forest functional level, and have you verified that this group is using linked values?
    Steve Kradel, Zetetic LLC

  • Best practices on how to document code?

    Hi,
    I tried searching the web for tutorials or examples, but couldn't come up with anything. Can anyone sum up some of their best practices for documenting LabVIEW code? I'm talking about a fairly elaborate program, built with a state machine approach. It has several subVIs. Since it is important that other people can understand my code, I guess documentation is fairly important, but NI hasn't got a tutorial for it yet. Maybe a suggestion?
    Thank you for your time! This forum has been a valuable companion already!
    Giovanni
    PS: I'm using LabVIEW 8.5 btw
    Giovanni Vleminckx
    Using LabVIEW 8.5 on Windows7

    Ben wrote:
    F. Schubert wrote:
    Document the state machine. You can use bubble-and-arrow (using any drawing program) or UML (for beginners it is easy to start with dia). Create a PNG from the diagram and paste it on the block diagram.
    Some use a stacked sequence with frame 0 holding the picture and frame 1 the code.
    For the wires of the state machine, it's good if you label them on both shift registers (outside the while loop).
    Felix
    That brings up a good point, both sides or just one?
    For small diagrams that easily fit on one screen, putting the labels on both sides can sometimes increase the diagram size by 30%.
    When the diagrams are small, I usually only label one side. The other plus of only putting the labels on one side is that I only have one set of labels to align with the shift register, while two sides would require twice as much shuffling.
    So I often bend that rule.
    Ben
    Of course, if you MUST use old LabVIEW versions without Wire Labels, the alignment (and BD Cleanup) can get to be a headache. I sell "maintainability" to my customers and explain the life cycle of the systems they are purchasing. I can usually sell the latest LabVIEW version with this argument.
    But I do have customers stuck in 6.1, so the point is valid. So (IMHO) shift registers need a label on only one side, since they run straight, either on the terminal (preferred) or outside the loop. Linked tunnels need labels on both sides, and tunnels that are not linked need just one label.
    And Felix brought up some great points. Any project should have a "Tree VI" with the major VIs/modules on the block diagram (hint: show labels). This makes navigation very easy when you have a few hundred subVIs. And of course, this practice REQUIRES meaningful icons and VI names or you are just looking at clutter.
    Jeff

  • Best practice regarding package-private or public classes

    Hello,
    If I was, for example, developing a library that client code would use and rely on, then I can see how I would design the library as a "module" contained in its own package,
    and I would certainly want to think carefully about what classes to expose to outside packages (using "public" as the class access modifier), as such classes would represent the
    exposed API. Any classes that are not part of the API would be made package-private (no access modifier). The package in which my library resides would thereby create an
    additional layer of encapsulation.
    However, thus far I've only developed small applications that reside in their own packages. There does not exist any "client code" in other packages that relies on the code I've
    written. In such a case, what is the best practice when I choose to make my classes public or package-private? Is it relevant?
    Thanks in advance!

    Jujubi wrote:
    ...However, thus far I've only developed small applications that reside in their own packages. There does not exist any "client code" in other packages that relies on the code I've written. In such a case, what is the best practice when I choose to make my classes public or package-private? Is it relevant?
    I've always gone by this rule of thumb: Do I want others to use my methods, or is it appropriate for others to use them? Are my methods "pure" and free of package-specific coding? Can I guarantee that everything will be initialized correctly if the package is included in other projects?
    Basically: if I can be sure that the code will do what it is supposed to do and I've not "corrupted" the obvious meaning of the method, then I usually make it public; otherwise the outside world, other packages, does not need to see it.
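
    As a minimal sketch of that rule of thumb (package and class names are hypothetical): the class that forms the API is public, while the helper that may change stays package-private.
        // Parser.java -- only Parser is intended for use from other packages
        package com.example.parser;

        public class Parser {
            /** Public API: callers in any package can use this. */
            public int countWords(String text) {
                return new Tokenizer(text).count();
            }
        }

        /** Package-private helper: invisible outside com.example.parser, so it is free to change. */
        class Tokenizer {
            private final String text;

            Tokenizer(String text) {
                this.text = text;
            }

            int count() {
                String trimmed = text.trim();
                return trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
            }
        }
    Even in a small application that lives in one package, keeping helpers package-private costs nothing and makes it obvious later which classes were designed as entry points.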

  • Best Practice Regarding Maintaining Business Views/List of Values

    Hello all,
    I'm still in the learning process of using BOXI to run our Crystal Reports. I was never familiar with the BO environment before, but I have recently learned that for every dynamic parameter we create for a report, the Business View/Data Connectors/LOV are created in the Enterprise Repository the moment the Crystal Report is uploaded.
    All of our reports are authored from a SQL Command statement and often times, various reports will use the same field name from the database for different reports.  For example, we have several reports that use the field name "LOCATION" that exists on a good number of tables on the database.
    When looking at the Repository, I've noticed there are several variations of LOCATION, each of which I assume belongs to one specific report. Having said that, I can see it becoming a nightmare to figure out which variation of LOCATION belongs to which report. Sooner or later the Repository will need to be maintained a bit more cleanly, and with the rate we author reports, I foresee a huge amount of headache down the road.
    With that being said, what's the best practice in a nutshell when trying to maintain these repository items?  Is it done indirectly on the Crystal Report authoring side where you name your parameter field identifiable to a specific report?  Or is it done directly on the Repository side?
    Thank you.

    Eric, you'll get a faster qualified response if you post to the  Business Objects Enterprise Administration forum as that forum is monitored by qualified support for BOE

  • Best practice regarding work flow (cutting, applying effects, exporting etc)

    Hi!
    I've been asked to shoot and edit a music video for a friend of mine, and I'm trying to figure out the best way to manage this project in PrE (in what order to do things and so on). I have a picture in my head which makes sense, but I'd like to have it confirmed. If you've been following the "Very disappointed with Premiere Elements" thread, you know I'm not a fan of how applying effects works when there are a lot of cuts between scenes and clips. A few of the steps below are meant to make that process more efficient.
    So, here's my idea, from the begining and in detail:
    1. Download the appropriate clips from the camera (in this case 1280x720, H.264 mov's from an EOS 500D).
    2. Create a PrE-project for each clip and maybe trim the in and outs a bit, if needed.
    3. Export each clip to uncompressed avi's.
    4. Create the main project file and import all the uncompressed avi's.
    5. Insert the clips on appropriate tracks in the timeline.
    6. Do all the cutting, trimming and syncing as completely as possible, without thinking about effects.
    7. When finished, open up each of the smaller clip projects and add the desired effects. This will mainly include contrasts, color corrections, noise etc, in order to get the right look and feel to each clip/scene.
    8. Again, export the clips to uncompressed avi's and overwrite the previous versions.
    9. Open up the main project, which now should contain the clips with look-and-feel effects visible.
    10. Add some additional effects if needed.
    11. Export/share, and you're done.
    Of course I will end up going back and forth through these steps anyway, but as a basic plan it seems reasonable. I see three main positive aspects:
    1. The look-and-feel effects will be applied on the raw material, before the converting process. This should result in a slightly better quality. Perhaps not noticeable, but anyway.
    2. The main project will be more CPU friendly and easier to work with.
    3. If I want to tweak the look-and-feel effect of a clip/scene, I don't have to do it on every split (I will have a lot of splits, so applying and changing the effect parameters would be time-consuming and inefficient). Of course, opening up the clip's specific project, changing the effect and then exporting to AVI will also take time, but points 1 and 2 make up for that.
    Have in mind that it is a music video project, to put things in the right context. We'll probably have a few parallel stories/scenes, with lots of cutting in and out between them. The timeline will probably look insane.
    So, am I thinking in the right direction here? Any traps I might fall into along the way?
    Regards
    Fredrik

    Fredrik,
    Though similar to your workflow, here is how I would do it.
    Import those "raw" Clips into a Project, and do my Trimming in that Project, relying on the Source Monitor to establish the In & Out Points for each, and also using different Instances of any longer "master Clip.". I would also do my CC (Color Correction), and all density (Levels, etc.) Effects here. Do not Trim too closely, as you will want to make sure that you have adequate Handles to work with later on.
    Use the WAB (Work Area Bar) to Export the material that I needed in "chunks," using either Lagarith Lossless CODEC, or UT Lossless CODEC *
    Import my music into a new Project and listen over and over, making notes on what visuals (those Exported/Shared Clips from above) I have at my disposal. At this point, I would also be making notes as to some of the Effects that I felt went with the music, based on my knowledge of the available visuals.
    Import my Exported/Shared, color graded Clips.
    Assemble those Clips, and Trim even more.
    Watch and listen carefully, going back to my notes.
    Apply any additional Effects now.
    Watch and listen carefully.
    Tighten any edits, adjust any applied Effects, and perhaps add (or remove existing) more Effects.
    Watch and listen carefully.
    Output an "approval" AV for the band/client.
    Tweak, as is necessary.
    Output "final approval" AV.
    Tweak, as is necessary.
    Export/Share, to desired delivery formats.
    Invoice the client.
    Cash check.
    Declare "wine-thirty."
    This is very similar to your proposed workflow.
    Good luck,
    Hunt
    * I have used the Lagarith Lossless CODEC with my PrE 4.0, but have not tried UT. Both work fine in PrPro, so I assume that UT Lossless will work in PrE too. These CODECs are fairly quick in processing/Exporting, and offer the benefit of smaller files than Uncompressed AVI. They are visually lossless. The resultant files will NOT be tiny, so one would still need a good amount of HDD space. Neither CODEC introduces any artifacts or color degradation.

  • Any "Best Practice" regarding use of zfs in LDOM with zones

    I have 3 different networks and I want to create a guest-domain for each of the three networks on the same control domain.
    Inside each guest-domain, I want to create 3 zones.
    To make it easy to handle growth and also make the zones more portable, I want to create a zpool inside each guest domain and then a ZFS dataset for each zone root.
    By doing this I will be able to handle growth by adding vdisks to the zpool (in the guest domain) and also to migrate individual zones by using zfs send/receive.
    In the "LDoms Community Cookbook", I found a description on how to use zfs clone in the control domain to decrease deploy time of new guest domains:
    " You can use ZFS to very efficiently, easily and quickly, take a copy of a previously prepared "golden" boot disk for one domain and redeploy multiple copies of that image as a pre-installed boot disk for other domains."
    I can see clear advantages in using zfs in both the control domain and the guest domain, but what is the downside?
    I end up with a kind of nested ZFS, where I create a zpool inside a zpool: the first in the control domain and the second inside a guest domain.
    How is ZFS caching handled? Will I end up with a solution with performance problems and a lot of I/O overhead?
    Kindest,
    Tor
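
    For reference, a rough sketch of the layout described above inside one guest domain might look like this (device, pool and zone names are made up, and the syntax is from memory, so double-check it against your Solaris release):
        zpool create zonepool c0d1
        zfs create zonepool/zone1
        chmod 700 /zonepool/zone1
        zonecfg -z zone1 "create; set zonepath=/zonepool/zone1; verify; commit"
        zoneadm -z zone1 install

        # growth: add another vdisk exported from the control domain
        zpool add zonepool c0d2

        # portability: replicate one zone's dataset to another guest domain
        zfs snapshot zonepool/zone1@migrate
        zfs send zonepool/zone1@migrate | ssh other-ldom zfs receive zonepool/zone1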

    I'm not familiar with the Sybase agent code and you are correct, only 15.0.3 seems to be supported. I think we'd need a little more debug information to determine if there was a workaround. May be switching on *.info messages in syslogd.conf might get some more useful hints (no guarantee).
    Unfortunately, I can't comment on if, or when, Sybase 15.5.x might be supported.
    Regards,
    Tim
    ---
