Best practice to feed an InfoObject which depends on 0MATERIAL?

Hi experts,
I'm a newbie on SDN and SAP BW, so I have a question that may seem basic to you.
I need to add a new InfoObject, "Article Typology", to a BW query; I will call it ZTYPOLOGY.
This InfoObject will contain three values derived from the description of 0MATERIAL (field /BI0/TMATERIAL-TXTMD):
If /BI0/TMATERIAL-TXTMD contains 'MDD', ZTYPOLOGY has to return "MDD";
else if /BI0/TMATERIAL-TXTMD contains '#', ZTYPOLOGY has to return "Import";
else ZTYPOLOGY has to return "Other".
My idea is to create ZTYPOLOGY as an attribute of the 0MATERIAL master data and fill it with an ABAP routine in the update rules of the load from 0MATERIAL_ATTR. Is that a good idea?
If not, could you advise me on the best practice?
Don't hesitate to ask if you need more information to understand my request.
Thanks a lot.
Tempka

Hi,
First check the source of your query and decide how you want to see the typology data in the report, i.e. always the current master data, or the data as it was at the time of the transaction.
For example, if you add your InfoObject as a navigation attribute, you will always see the current master data. If you add it to your InfoCube or DSO instead, the value is static and shows whatever was loaded historically.
Once you have settled on an approach, you can simply create a field routine and load your data; a minimal sketch follows below.
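For illustration only, the derivation in a transformation field routine could look roughly like this. It is a minimal sketch, assuming the texts of 0MATERIAL are already loaded and the source structure offers the material number; the names source_fields-material and result are placeholders to adapt to your actual transformation:

    * Sketch of a field routine for ZTYPOLOGY (names are illustrative).
    * Looks up the medium description from the text table of 0MATERIAL
    * and derives the typology. CS is ABAP's "contains string" operator.
        DATA lv_txtmd TYPE /bi0/tmaterial-txtmd.

        SELECT SINGLE txtmd
          INTO lv_txtmd
          FROM /bi0/tmaterial
          WHERE material = source_fields-material
            AND langu    = sy-langu.

        IF sy-subrc = 0 AND lv_txtmd CS 'MDD'.
          result = 'MDD'.
        ELSEIF sy-subrc = 0 AND lv_txtmd CS '#'.
          result = 'Import'.
        ELSE.
          result = 'Other'.
        ENDIF.

Storing the value as a material attribute this way keeps the query simple; the trade-off, as noted above, is whether you want the current or the historical typology in reporting.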
Regards,
Durgesh.

Similar Messages

  • Any best practice to archive POs which do not have a corresponding invoice

    Hello,
    As part of the initial implementation and conversion, we have a lot of POs/LTAs whose corresponding invoices were never converted into SAP from the legacy system. The SAP archiving program flags those as not business complete because the invoiced quantity does not match the PO quantity (there are no invoices to start with). Just setting the 'delivery completed' and 'final confirmation' flags on the PO does not help. Has anybody run into a similar situation, and how did you resolve it? I am reluctant to enhance the standard SAP archiving program to bypass those checks; that is my last option. Any SAP-recommended note, best practice, etc. would help.
    Satyajit Deb

    Where was the invoice posted?
    Was it posted in the legacy system?
    Clearing the GR/IR account with MR11 will usually close such POs.

  • Best practice around handling time dependency for flat file loads

    Hi folks,
    This is a fairly common situation: handling time dependency for flat file loads. Please can anyone share their experience with handling this? One common approach is to handle the time-validity changes within the flat file, where they are easily changeable by the user, but that is prone to user input errors. Another would be to handle this via a DSO. Possibly, the data could also be entered directly in BI using IP planning layouts. There is an IP planning function that allows loading flat file data, but it only works without the time dependency factor.
    It would be great to hear thoughts, or if anyone can point to a best practice document for such a scenario.
    Thanks.

    Bump!

  • Best practice on user deletions in HR/SU01

    Hi
    We use CUA/SSO.
    The records are fed from HR and sent to Active Directory (AD).
    AD brings the records back and creates/changes users in SU01.
    A function module populates the CVR (timesheet) parameter depending on whether you are an employee or a contractor.
    Occasionally, our HR department asks the SAP support team to delete records - for example, if the employee or contractor hasn't in fact joined the company.
    Until some time ago, the deletion was causing problems because:
    a) the record does not get deleted in AD, and there is no way to send the deletion across afterwards;
    b) when AD tries to reprocess that specific record, the LDAP connector will not find it as an HR record, so in SU01, for some reason, the VALID FROM field gets wiped out, and so does the CVR parameter for the timesheet...
    We have since changed the deletion process; however, I would like to ask what the best practice for this is. HR want to delete the record so it can be re-used.
    I cannot delete those records from the UMR unless I am 100% sure they have never used the system (I will have to check that).
    I hope I have provided enough information on what the issue is.
    Thank you
    Nadia

    Best practice is not to delete.
    > HR want to delete the record so it can be re-used.
    So many people with the same name? Perhaps add a suffix of two numbers when the ID naming convention produces a clash. Besides, don't your AD admins want unique names in AD as well?
    E.g. (just an imperfect example)
    MUSTERMA = Alfred MUSTERMan
    MUSTERMM = Manfred MUSTERMan
    MUSTER01 = Mechtilde MUSTERMuller
    > I cannot delete those records from the UMR unless I am 100% sure they have never used the system (I will have to check that).
    The surest way is to determine that they have never logged on. But that does not exclude the possibility that records exist for them which may eventually need a "user existence check" in order to be read. One such example is the Security Audit Log: there may have been failed login attempts.
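    If you want to script that first check, here is a minimal ABAP sketch reading the last logon date in USR02 (the user ID is a placeholder, and this deliberately ignores other traces such as audit log entries):

        * Rough "has this user ever logged on?" check. USR02-TRDAT holds
        * the date of the last logon and stays initial if the user never
        * logged on in this client.
        DATA lv_trdat TYPE usr02-trdat.

        SELECT SINGLE trdat
          INTO lv_trdat
          FROM usr02
          WHERE bname = 'TESTUSER'.  "user ID to check (placeholder)

        IF sy-subrc <> 0.
          WRITE / 'User does not exist in this client.'.
        ELSEIF lv_trdat IS INITIAL.
          WRITE / 'User has never logged on in this client.'.
        ELSE.
          WRITE: / 'Last logon on', lv_trdat.
        ENDIF.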
    Good luck,
    Julius

  • Best practice for versioning a WCF web service

    I have a SOAP web service developed with WCF; is there a best practice for versioning it? When I change the contract I don't want to update all the clients' references; I simply want to publish a new version that coexists with the old one.
    CASE STUDY: I have a web service with one endpoint and all clients point to it. Soon I will have to change the contract, and I am looking for a way to avoid breaking all the clients. I know two ways to avoid this:
    publish the web service with the new contract on another server
    create a new .svc file with the new contract
    Are there other ways?

    It depends on many things, such as what changes you are going to make, what platform your clients are built on, what your change tracking policy is, etc.
    For example, if you just want to add a new property to a data contract or a new operation to a service contract, it is safe to add it to your current implementation as long as the clients are version-tolerant (DataContractSerializer is version-tolerant) and it complies with your change tracking policy. However, if the changes are more serious, you had better stick to a strict versioning strategy, which requires creating a new contract and exposing it with a new endpoint.
    I suggest reading these articles:
    Versioning Strategies
    Best Practices: Data Contract Versioning
    This should help you choose the right versioning strategy and provide you with the best practices to follow.

  • Best practice to replace a SUP on a 6509

    What is the best practice to install a new SUP on a 6509? We currently have dual SUPs. Do we need to power down the switch to install the new SUP?

    Hi,
    Below are a few methods you can choose from.
    It really depends on which SUP you want to replace (the one which is currently active, or the standby SUP).
    Let's go step by step.
    1st - Say you want to replace the standby SUP in the working chassis:
    Answer: You need not worry; just remove the standby SUP and install the new one. Make sure the standby SUP is running the same software version as the active one.
    2nd - Replacing the active SUP:
    Answer: To avoid any disruption, fail over to the standby SUP so that it takes over the active role, and then proceed with replacing the (formerly) active SUP.
    A few references which might help you:
    https://supportforums.cisco.com/discussion/11640021/replace-failed-sup-6500-sup32
    http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6500-virtual-switching-system-1440/109334-replace-vss-sup-proc-v1.html
    HTH

  • Exception Propagation - Best Practices

    Hello,
    I was wondering what the best practice for exception propagation is.
    The way I know and have been doing it is to specify an error page in my web.xml, and in that page I read a request parameter from the session (which I populate in the catch blocks across the various classes), display it to the user, and ask him to contact the admin. Of course I log the exceptions using log4j.
    I was wondering if there are other ways people do this (other than just displaying a "Sorry, Application Error" page), and what you think the best practice of exception handling and, more importantly, exception propagation should be.
    Thanks in advance for your time
    rgds,

    Sarvananda wrote:
    > ...what do you think should be the best practice of exception handling and, more importantly, exception propagation?
    The very best practice is to always handle the exception, that is to say: never use empty catch blocks.
    As already stated, there are many correct ways to handle exceptions, depending largely on the result you desire for a given exception. If you want feedback for debugging: I've made the errors descriptive - class/method and exception/error included in the message to the end user. This almost never works, since users never read it, and if they report it they just say: "I got this error thingy and it said to call you..." I got smarter the second time around and put the errors in logs, so when they actually called I could have them look up the error for me, or even better, just send me the log so I could see any other problems they didn't bother to report.
    It sounds like you are doing web development; one thing I have done in the past is to pop up an e-mail, ready to go with all the info in it. All the end user had to do was hit send.

  • Best Practices on smart scans

    For Exadata X2-2, is there a best practices document on enabling smart scans for all the application code?

    We cover more in our book, but here are the key points:
    1) Smart scans require a full segment scan to happen (full table scan, fast full index scan or fast full bitmap index scan)
    2) Additionally, smart scans require a direct path read to happen (reads directly to PGA, bypassing buffer cache) - this is automatically done for all parallel scans (unless parallel_degree_policy has been changed to AUTO). For serial sessions, the decision to do a serial direct path read depends on the segment size, the _small_table_threshold parameter value (which is derived from the buffer cache size) and how many blocks of the segment are already cached. If you want to force the use of serial direct path reads for your serial sessions, then you can set _serial_direct_read = always.
    3) Thanks to the above requirements, smart scans are not used for index range scans, index unique scans and any single row/single block lookups. So if migrating an old DW/reporting application to Exadata, then you probably want to get rid of all the old hints and hacks in there, as you don't care about indexes for DW/reporting that much anymore (in some cases not at all). Note that OLTP databases still absolutely require indexes as usual - smart scans are for large bulk processing ops (reporting, analytics etc, not OLTP style single/a few row lookups).
    Ideal execution plan for taking advantage of smart scans for reporting would be:
    1) accessing only required partitions thanks to partition pruning (partitioning key column choices must come from how the application code will query the data)
    2) full scan the partitions (which allows smart scans to kick in)
    2.1) no index range scans (single block reads!) and ...
    3) joins all the data with hash joins, propagating results up the plan tree to next hash join etc
    3.1) This allows bloom filter predicate pushdown to cell to pre-filter rows fetched from probe row-source in hash join.
    So, simple stuff really - and many of your everyday optimizer problems just disappear when there's no trouble deciding whether to do a full scan vs. a nested loop with some index. Of course this is a broad generalization; your mileage may vary.
    Even though DWs and reporting apps benefit greatly from smart scans and some well-partitioned databases don't need any indexes at all for reporting workloads, the design advice does not change for OLTP at all. It's just RAC with faster single block reads thanks to flash cache. All your OLTP workloads, ERP databases etc still need all their indexes as before Exadata (with the exception of any special indexes which were created for speeding up only some reports, which can take better advantage of smart scans now).
    Note that there are many DW databases out there which are used not only for brute-force reporting and analytics but also for frequent single-row lookups (golden trade warehouses, and other reference data, being examples). These would likely still need indexes to support fast single-row (or few-row) lookups. So it all comes down to the nature of your workload, how many rows you are fetching and how frequently you will be doing it.
    And note that smart scans only make data access faster, not sorts, joins, PL/SQL functions coded into the select column list or where clause, or application loops doing single-row processing... These still work as usual (with the exception of the bloom filter pushdown optimization for hash joins). Of course, when moving to Exadata from your old E25k you'll see a speedup anyway, as the Xeons with their large caches are just fast :-)
    Tanel Poder
    Blog - http://blog.tanelpoder.com
    Book - http://apress.com/book/view/9781430233923

  • BI Best Practices - HR Performance Management

    Hi all
    I'm looking through the BW Best Practices and don't see any for Performance Management
    (0HCM_TM_ANALYTICS_EHP3).
    Please can anybody guide me on which BPs to use in order to get the Appraisals & Qualifications content (cubes, update rules, InfoObjects, etc.)?
    Thanking you

    http://help.sap.com/saphelp_nw70/helpdata/en/90/86e8f5a8eb7645b03b0b7ac107740a/frameset.htm
    In BI, "Objective Setting and Appraisals" is the new name for Performance Management.

  • What is the best practice to provide a text file for a Java class in an OSGi bundle in CQ?

    This is probably a very basic question so please bear with me.
    What is the best way to provide a .txt file to be read by a Java class in an OSGi bundle in CQ 5.5?
    I have been able to read a file called "test.txt", placed in a structure like /src/resources/<any-sub-folder>/test.txt, from my Java class at /src/main/java/com/test/mytest/Test.java using the bundle's getResource and getEntry calls, but I was not able to use context.getDataFile. How is the getDataFile method meant to be used?
    And what if I want to read a file located in another bundle - is that possible? Or can I add the file to some repository and then access it? I am not clear on how to do this.
    I would also like to know the best practice if I need to provide a large data set in a flat file to be read by a Java class in CQ5.
    Please provide detailed steps, or point me to a how-to guide or other helpful resources, as I am a novice.
    Thank you in advance for your time and help.
    VS

    As you can read in the OSGi Core specification (section 4.5.2), the getDataFile() method is for reading and writing files in the bundle's private persistent storage area. It cannot be used to read files contained in the bundle. The issue Sham mentions refers to a version of Felix which is not used in CQ.
    The methods you mentioned (getResource and getEntry) are appropriate for reading files contained in a bundle.
    Reading a file from the repository is done using the JCR API. You can see a blueprint for how to do this by looking at the readFile method in http://svn.apache.org/repos/asf/jackrabbit/tags/2.4.0/jackrabbit-jcr-commons/src/main/java/org/apache/jackrabbit/commons/JcrUtils.java. Unfortunately, this method is not currently usable as it was declared incorrectly (it should be a static method, but is an instance method).
    Regards,
    Justin

  • Experiences with SAP Best Practices packages?

    We are considering applying a specific SAP Best Practices package on top of ECC 6.0; it will be a new ECC 6.0 installation.
    I have read the BP FAQ pages at http://help.sap.com/bp_bw370/html/faq.htm, note 1225909 (How to apply SAP Best Practices), and the specific note and installation guide for the required Best Practices package.
    I understand that applying a BP package is not that easy, but it should speed up the implementation process.
    If you have participated in a project that implemented SAP Best Practices on ERP, please share your experiences with me.
    Here are some topics for discussion
    (BP = SAP Best Practices package):
    Could a BP implementation constrain your future EHP or SP upgrades, since BP is tightly linked to a certain EHP and SP level?
    SAP support?
    Is the installation process, including the config steps from the BP installation guide, longer than usual, given that there can be multiple activation and configuration steps done by people other than Basis consultants?
    BPs are country-dependent. What if your application is used multinationally?
    What if you later find the BP unnecessary?
    Did it generate difficulties in the beginning but considerably speed up reaching the final goal, compared to ECC 6.0 without BP?
    Traps to fall into?
    Typical issues that Basis staff should note when preparing/installing an ERP system with BP?
    Positive/negative things?
    I am not saying the above statements are true or false; I just wanted to prompt you to comment.
    Br: KimZi

    Hi,
    Make sure your web service URL is correct in the Live Office connection and also in the Xcelsius data manager.
    Did you check all the connections in the refresh button properties? You may try selecting "Refresh After Components Are Loaded" on the Behavior tab of the refresh button properties.
    I think Xcelsius refreshes are serial, so it is possible that the first component's refresh is still in progress while you are expecting another component to refresh.
    Click "Enable Load Cursor" on the data manager's Usage tab; it will give you visibility of the refresh. If anything is refreshing you will see an hourglass.
    Cheers

  • ECC 5.0 and BO without BW/BI Integration Best Practice

    Hi,
    I know that SAP can connect to a BO system using Rapid Marts (replacing the BW/BI data warehouse). Is this best practice? When I look at the BI platform roadmap, I conclude that BI and BO will be integrated into one product, and I'm afraid to invest right now because it seems we will need to implement a BW system in the future to make this BI system optimal.
    My company wants to use BO without implementing a BW system right now. The SAP system we currently use is ECC 5.0, and my company will only upgrade to the next version when 5.0 is no longer supported. Thank you.
    Cheers,
    Satria

    Hi Satria,
    the answer depends a little bit on what the requirement is.
    Rapid Marts are an option for implementing a data mart solution on top of an ERP system. On top of the Rapid Marts you can then use the BusinessObjects tools without hitting the OLTP system directly.
    Ingo

  • Best way to create an object which holds a collection of another object type

    I'm writing a caching object to store other objects. The cache is only valid for a session, so I want to store the data in a nested table.
    I have tried to simplify my example down to its core.
    How do I make this work, and what is the best way to index the stored items for fastest retrieval?
    CREATE OR REPLACE TYPE ty_item AS OBJECT (
       id_object VARCHAR2 (18),
       ORDER MEMBER FUNCTION compare (other ty_item) RETURN INTEGER
    );
    /
    CREATE OR REPLACE TYPE BODY ty_item
    AS
       ORDER MEMBER FUNCTION compare (other ty_item) RETURN INTEGER
       IS
       BEGIN
          IF SELF.id_object < other.id_object THEN
             RETURN -1;
          ELSIF SELF.id_object > other.id_object THEN
             RETURN 1;
          ELSE
             RETURN 0;
          END IF;
       END;
    END;
    /
    CREATE OR REPLACE TYPE ty_item_store AS TABLE OF ty_item;
    /
    -- Reworked so that it compiles: REF ty_item is only valid for rows of
    -- object tables, so these methods return ty_item values instead. "ADD"
    -- is a SQL reserved word, hence the method name add_item. SELF is
    -- declared IN OUT NOCOPY where the cache attribute gets modified.
    CREATE OR REPLACE TYPE ty_item_holder AS OBJECT (
       cache ty_item_store,
       MEMBER FUNCTION get (SELF IN OUT NOCOPY ty_item_holder,
                            p_id_object IN VARCHAR2) RETURN ty_item,
       MEMBER FUNCTION find (p_id_object IN VARCHAR2) RETURN ty_item,
       MEMBER FUNCTION add_item (SELF IN OUT NOCOPY ty_item_holder,
                                 p_id_object IN VARCHAR2) RETURN ty_item
    );
    /
    CREATE OR REPLACE TYPE BODY ty_item_holder
    AS
       MEMBER FUNCTION get (SELF IN OUT NOCOPY ty_item_holder,
                            p_id_object IN VARCHAR2) RETURN ty_item
       IS
          rtn ty_item;
       BEGIN
          rtn := find (p_id_object);
          IF rtn IS NULL THEN
             rtn := add_item (p_id_object);
          END IF;
          RETURN rtn;
       END;

       -- A nested table attribute must be queried via TABLE();
       -- a bare "FROM cache" does not compile.
       MEMBER FUNCTION find (p_id_object IN VARCHAR2) RETURN ty_item
       IS
          rtn ty_item;
       BEGIN
          IF SELF.cache IS NULL THEN
             RETURN NULL;
          END IF;
          SELECT VALUE (ch)
            INTO rtn
            FROM TABLE (SELF.cache) ch
           WHERE ch.id_object = p_id_object;
          RETURN rtn;
       EXCEPTION
          WHEN NO_DATA_FOUND THEN
             RETURN NULL;
       END;

       -- INSERT INTO a collection attribute is not valid; extend it instead.
       MEMBER FUNCTION add_item (SELF IN OUT NOCOPY ty_item_holder,
                                 p_id_object IN VARCHAR2) RETURN ty_item
       IS
          item ty_item;
       BEGIN
          item := ty_item (p_id_object);
          IF SELF.cache IS NULL THEN
             SELF.cache := ty_item_store ();
          END IF;
          SELF.cache.EXTEND;
          SELF.cache (SELF.cache.LAST) := item;
          RETURN item;
       END;
    END;
    /

    > Best way to create an object which holds a collection of another object type.
    The best place for data in a database is - no real surprise - in tables. If that data is temporary in nature, global temporary tables cater for that.
    Storing/caching data using PL/SQL requires very expensive private process memory (PGA) on the server. This does not scale.
    > I'm writing a caching object to store other objects.
    Irrespective of how l33t your haxor skillz are, you will not be able to code as sophisticated, performant and scalable a PL/SQL data cache as what already exists in Oracle (the database buffer cache).
    > The cache is only valid for a session, so I want to store the data in a nested table.
    Not sure how you take one (session-local data) to mean the other (oh, let's use a nested table).
    Session-local data can be done using PL/SQL static variables. It can be done using name-value pairs residing in a context (an Oracle namespace). It can be done using a global temporary table.
    The choice depends on the requirements that need to be addressed. However, the term "caching" has very specific connotations that suggest a global temporary table is likely the best-suited candidate.

  • Configure "best practise baseline" manually

    Dear experts!
    I'm studying SAP ERP as a student, and I am interested in SD. I have read the certification material for TSCM and SCM.
    Now I am waiting for your advice on whether I need to configure the "Best Practices baseline" manually.
    As we can see, it will take a lot of time to configure the baseline manually.
    However, I was thinking that it would be helpful for my general view of ERP, and that I could use it to study the industry Best Practices once I complete it.
    What do you think?
    Waiting for your advice!
    Thank you!
    Best regards!
    Tang Dark

    Hi,
    Best Practices is a specific SAP solution, and to get it you have to buy it specifically - it does not come standard with every ERP installation.
    What is Best Practices? It is basically a set of preconfigured content (BC sets) which you upload into SAP, hence reducing your configuration effort.
    The baseline is basically the general configuration (per module) that a company requires in order to carry on with config: things like the org structure, etc.
    On top of the baseline you can then start loading other, more specific BC sets per module.
    Best Practices is a methodology aimed at smaller companies because it reduces the blueprint and configuration phases; projects can be finalised within 6 months, compared to standard ERP implementations that may take 1 to 2 years.
    If you want to be a proper SD consultant, do not study Best Practices only; do your SAP academy. Best Practices you can learn after you know SAP.
    As for industry solutions, these are add-ons that address specific industry requirements. For example, bills of services is something used in project environments such as engineering & construction, so that functionality is not necessary for, say, the retail industry, which sells only finished products.
    Again, first learn your SAP, and then you can focus on learning the rest. If you know SAP you can work with any Best Practices package or any industry solution; if you study only Best Practices, you can only work with BP, and the same goes for industry solutions.

  • Business Content Best Practice

    Hi Guys
    Just a quick request -
    I have activated BC a couple of times, but each time it takes longer than it should - I miss certain areas, activate far too much, etc.
    Apart from help.sap.com, does anyone have any best practice guides or docs on BC that would allow me to cut down on my activation effort?
    Thanks.

    Hi Scott,
    I would really recommend activating only the necessary objects (collecting with "in data flow before and afterwards" is very painful). Just collect the data structures such as DataSources, InfoSources, DataStore objects, InfoCubes, ..., and the queries (then all InfoObjects get collected). Then collect the linkages (transfer rules, transformations, etc.) - these can easily be found in the help menu. After the collection, activate it in batch.
    Best regards, Clemens
