Best Practices - Telco - PM

Dear All,
I am in need of some "best practices" for asset management in the telecommunications industry.
One of my LE clients would like to implement asset management. The focus will be on PM: equipment tracing & tracking in the system. The idea is a "best-practice" asset management system in their SAP ECC.
We need insight into best practices for Plant Maintenance and equipment trace & tracking in the telecommunications industry:
•     Asset coding/naming design in the telecommunications industry (especially for "network assets": is there some kind of best practice for naming the assets, e.g. hierarchies, naming conventions, etc.?)
•     Insight into Plant Maintenance's core functionality in telco
•     Any tracing & tracking system proposal (barcode and/or RFID technologies for telecommunication asset management), and any insight into partners working in this area
They are specifically interested in Deutsche Telekom.
Best regards
Yavuz Durgut - SAP Turkey

Hi,
You have a good start. What you need to do is:
1) Find out what the requirements are, i.e. what your user wants. If this is a fact-finding mission (e.g., they want to see what's in the system), then your requirement becomes "load the data into R/3", so figure out what they configured in PM and use those definitions as your requirements.
2) Use those requirements to find data in the fields listed in the MultiProvider, InfoCube, or DataStore Object sections in your first link. In other words, now that you know what data to look for, look for it in the data targets (MPs, cubes, and DSOs). If you find some of the data you want, trace back the InfoSources and determine which DataSources in R/3 load the data you are looking for. After all that, check those DataSources for any additional fields you may need and add them in.
So, if your company doesn't maintain equipment costs or maintenance costs for equipment, then you don't have to worry about 0PM_MP04. Use this type of logic to whittle down to what you really need and want, then activate only those objects.
Good Luck,
Brian

Similar Messages

  • Best Practice for setting systems up in SMSY

    Good afternoon - I want to clean up our SMSY information and I am looking for some best-practice advice on this. We started with an ERP 6.0 dual-stack system, so I created a logical component Z_ECC under "SAP ERP" --> "SAP ECC Server" and assigned all of my various instances (Dev, QA, Train, Prod) to this logical component. We then applied Enhancement Package 4 to these systems. I see under logical components there is an entry for "SAP ERP ENHANCE PACKAGE". Now that we are on EhP4, should I create a different logical component for my ERP 6.0 EhP4 systems? I see in logical components under "SAP ERP ENHANCE PACKAGE" there are entries for the different products that can be updated to EhP4, such as "ABAP Technology for ERP EHP4", "Central Applications", ... "Utilities/Waste&Recycl./Telco". If I am supposed to change the logical component to something based on EhP4, which should I choose?
    The reason that this is important is that when I go to Maintenance Optimizer, I need to ensure that my version information is correct so that I am presented with all of the available patches for the parts that I have installed.
    My Solution Manager system is 7.01 SPS 26. The ERP systems are ECC 6.0 EhP4 SPS 7.
    Any assistance is appreciated!
    Regards,
    Blair Towe

    Hello Blair,
    In this case you have to assign both products, "EHP 4 for ERP 6" and "SAP ERP 6", to your system in SMSY.
    You will then have 2 entries in SMSY, one under each product; the main instance for EHP 4 for ERP 6 must be Central Applications, and the one for SAP ERP 6 is SAP ECC Server.
    This way your system should be correctly configured to use MOPZ.
    Unfortunately I'm not aware of a guide explaining these details.
    Sometimes the System Landscape guide at service.sap.com/diagnostics can be very useful. See also note 987835.
    Hope it can help.
    Regards,
    Daniel.
    Edited by: Daniel Nicol on May 24, 2011 10:36 PM

  • Translation patterns - best practice

    We have 300 DIDs from our telco. Currently, only 150 are in use. If a call comes through for an unassigned number, I would like to set up a call handler that states the number is a non-working number that belongs to the company and then gives options for contacting the correct person. Also, when a person leaves the company I currently forward their number to the operator, but I would like to make these numbers part of the call handler as well.
    My question is this: what is the best way to set this up? Currently I remove the number from the directory numbers and set up a translation pattern to point the number to an endpoint such as the operator. Is this the best thing to do? I would like to know what is considered "best practice" for keeping the phone system as clean as possible.
    I appreciate any input.
    Pat

    I would set up a catch-all scenario with a translation to a CTI RP that forwards to VM and hits the call handler you want. For example, if you had the DIDs 212-555-1000 through 212-555-1299, I would first set up a non-DID CTI RP that matches your call handler DTMF (e.g., 7999 if you use 4-digit extensions). The CTI RP for 7999 would forward to VM, and the call handler with DTMF 7999 would play your message that the number is not in use.
    Then set up a translation for 212-555-1[012]XX that translates to 7999.
    This wildcard match would not route the call if a more specific match were present within the Calling Search Space for the gateway. So if extension 1050 was present, the call would route to that phone; but if extension 1051 was a terminated or unused number, it would not be present, and the call would therefore hit the translation and be routed to the "number not in use" call handler.
    I think this is what you are after: a way to minimize the translations and not have to keep track of individual numbers. Of course, modify the length of the translations if you are not routing based on 10 digits.

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. What I often struggle with is the Logical Levels (in the Content tab), where the level of each dimension must be set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the business model (and physical model) gets more complex, I sometimes struggle to get the aggregates to work/appear with different dimensions. (Using the "More" > "Get Levels" menu does not always give the best solution, far from it.) I have some combinations of left and right outer joins as well, making it even more complicated for the BI Server.
    For instance, I have about 10-12 different dimensions - should all of them always be connected to each fact table, either at the Detail or Total level? I can see the use of the logical levels when using aggregate fact tables (on quarter, month, etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    "For instance - I have about 10-12 different dimensions - should all of them always be connected to each fact table? Either on Detail or Total level."
    It is not necessary to connect to all dimensions; it depends on the report you are creating. As a best practice, though, maintain all of them at the Detail level when you define the join conditions in the physical layer.
    For example, for the sales table: if you want to report at the ProductDimension.ProductName level, use the Detail level; otherwise use the Total level (at the Product or Employee level).
    From the Admin Guide (Get Levels definition): "Get Levels. (Available only for fact tables.) Changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension."
    thanks,
    Saichand.v

  • Best practices for setting up users on a small office network?

    Hello,
    I am setting up a small office and am wondering what the best practices/steps are to set up and manage the admin and user logins and sharing privileges for the setup below:
    Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
    Video editing suite: want to connect a new iMac and a Mac Pro, on an open login (multiple users)
    All machines need to be able to connect to the network, peripherals, and an external hard drive. I would also like to set up drop boxes to easily share files between the computers (I was thinking of using the external hard drive for this).
    Thank you,

    Hi,
    Thanks for your posting.
    If you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more and detail information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang

  • Add fields in transformations in BI 7 (best practice)?

    Hi Experts,
    I have a question regarding transformation of data in BI 7.0.
    Task:
    Add new fields in a second level DSO, based on some manipulation of first level DSO data. In 3.5 we would have used a start routine to manipulate and append the new fields to the structure.
    Possible solutions:
    1) Add the new fields to first level DSO as well (empty)
    - Pro: Simple, easy to understand
    - Con: Disk-space consuming; performance degrades when writing to the first level DSO
    2) Use routines in the field mapping
    - Pro: Simple
    - Con: Hard to performance optimize (we could of course fill an internal table in the start routine and then read from this to get some performance optimization, but the solution would be more complex).
    3) Update the fields in the End routine
    - Pro: Simple, easy to understand, can be performance optimized
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine).
    Does anybody know what is best practice is? Or do you have any experience regarding what you see as the best solution?
    Thank you in advance,
    Mikael

    Hi Mikael.
    I like the 3rd option and have used it many, many times. In answer to your question:
    Update the fields in the end routine
    - Pro: Simple, easy to understand, can be performance optimized. Yes, I have read and tested this; it does run faster. An OSS consulting note is out there describing the performance of the end routine.
    - Con: We need to ensure that the data we need also exists (i.e. if we have one field in DSO 1 that we only use to calculate a field in DSO 2, this would also have to be mapped to DSO 2 in order to exist in the routine). Yes, but by using the result package, the manipulation can be done easily.
    Hope it helps.
    Thanks,
    Pom

  • Temp Tables - Best Practice

    Hello,
    I have a customer who uses temp tables all over their application.
    This customer is a novice and the app has its roots in VB6. We are converting it to .NET.
    I would really like to know the best practice for using temp tables.
    I have seen code like this in the app.
    CR2.Database.Tables.Item(1).Location = "tempdb.dbo.[##Scott_xwPaySheetDtlForN]"
    That seems to work, though I do not know why the full tempdb.dbo.[##...] form is required.
    However, when I use this in the new report I am doing, I get runtime errors.
    I also tried this:
    CR2.Database.Tables.Item(1).Location = "##Scott_xwPaySheetDtlForN"
    I did not get errors, but it returned data I did not expect.
    Before I delve into different ways to do this, I could use some help with a good pattern to use.
    thanks

    Hi Scott,
    Are you using the RDC still? It's not clear, but it looks like it.
    We had an API that could piggyback the HDBC handle in the RDC (craxdrt.dll), but that API is no longer available in .NET. Also, the RDC is not supported in .NET, since .NET uses the framework and the RDC is COM.
    The workaround is to copy the temp data into a DataSet and then set the table location to the DataSet. There is no way that I know of to get at tempdb from .NET. The reason is that there is no CR API to set the owner of the table to the user; MS SQL Server locks tempdb so that that user has exclusive rights on it.
    Thank you
    Don

  • Best Practice for Significant Amounts of Data

    This is basically a best-practice/concept question and it spans both Xcelsius & Excel functions:
    I am working on a dashboard for the US Military to report on some basic financial transactions that happen on bases around the globe.  These transactions fall into four categories, so my aggregation is as follows:
    Year,Month,Country,Base,Category (data is Transaction Count and Total Amount)
    This is a rather high level of aggregation, and it takes about 20 million transactions and aggregates them into about 6000 rows of data for a two year period.
    I would like to allow the users to select a Category and a country and see a chart which summarizes transactions for that country ( X-axis for Month, Y-axis Transaction Count or Amount ).  I would like each series on this chart to represent a Base.
    My problem is that 6000 rows still appears to be too many rows for an Xcelsius dashboard to handle.  I have followed the Concatenated Key approach and used SUMIF to populate a matrix with the data for use in the Chart.  This matrix would have Bases for row headings (only those within the selected country) and the Column Headings would be Month.  The data would be COUNT. (I also need the same matrix with Dollar Amounts as the data). 
    In Excel this matrix works fine and seems to be very fast. The problem is with Xcelsius. I have imported the spreadsheet but have NOT even created the chart yet, and Xcelsius is CHOKING (and crashing). I changed Max Rows to 7000 to accommodate the data. I placed a simple combo box and a grid on the canvas (but no chart yet), and the dashboard takes forever to generate and is REALLY slow to react to a simple change in the combo box.
    So, I guess this brings up a few questions:
    1) Am I doing something wrong? Did I miss something that would prevent this problem?
    2) If this is standard Xcelsius behavior, what are the best practices to solve the problem?
    a. Do I have to create 50 different data ranges in order to improve performance (i.e. a separate range for each country-category combination)?
    b. Would it even work with that many data ranges in it?
    c. Do you aggregate the data as a crosstab (months as column headings) and insert that crosstabbed data into Excel?
    d. Other ideas that I'm missing?
    FYI:  These dashboards will be exported to PDF and distributed.  They will not be connected to a server or data source.
    Any thoughts or guidance would be appreciated.
    Thanks,
    David

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/ gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to a certain extent.
    Regards
    Nikhil

  • Best-practice for Catalog Views ? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000), and 300 end users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is: how can I implement this?
    My first idea is:
    1. Create 110 procurement catalogs (1 for every product category). Each catalog should contain only its product category.
    2. In my org model, assign at user level all the "catalogs" that the user should access.
    Do you have any ideas for improving this?
    Greetings from Mexico,
    Diego

    Hi,
    Your approach will work, but you'll get maintenance issues (too many catalogs, and a catalog link to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that this is limited in CCM 1.0, because you'll have to assign the items to each view one by one (no dynamic or mass processes); it has been enhanced in CCM 2.0.
    My advice:
    - Challenge your customer about views, and try to limit the number of views, for example to strategic and non-strategic.
    - With CCM 1.0, stick to the procurement catalogs, or implement BAdIs to assign items to the views (I have tried it; it works, but it is quite difficult), again with a limited number of views.
    Good luck.
    Vadim

  • Best practice on sqlite for games?

    Hi Everyone, I'm new to building games/apps, so I apologize if this question is redundant...
    I am developing a couple of games for Android/iOS and was initially using a regular (unencrypted) SQLite database. I need to populate the database with a lot of info for the games, such as levels, store items, etc. Originally, I was creating the database with SQLite Manager (Firefox), and when I installed a game on a device, it would copy that pre-populated database to the device. However, if someone were able to access that app's database, they could feasibly add unlimited coins to their account, unlock every level, etc.
    So I have a few questions:
    First, can someone access that data in an APK/IPA app once downloaded from the app store, or is the method I've been using above secure and good practice?
    Second, is the best solution to go with an encrypted database? I know Adobe Air has the built-in support for that, and I have the perfect article on how to create it (Ten tips for building better Adobe AIR applications | Adobe Developer Connection) but I would like the expert community opinion on this.
    Now, if the answer is to go with encrypted, that's great - but, in doing so, is it possible to still use the copy function at the beginning or do I need to include all of the script to create the database tables and then populate them with everything? That will be quite a bit of script to handle the initial setup, and if the user was to abandon the app halfway through that population, it might mess things up.
    Any thoughts / best practice / recommendations are very appreciated. Thank you!

    I'll just post my own reply to this.
    What I ended up doing was creating a script that self-creates the database and then populates the tables (unencrypted; the encryption portion is commented out until store publishing). It's a tremendous amount of code, completely repetitive except for the values I'm entering, but you can't do an insert loop or a multi-row insert statement in AIR's SQLite, so the best move is to create everything line by line.
    This creates the database, and since it's not encrypted, it can be tested using Firefox's SQLite Manager or some other database program. Once you're ready for deployment to the app stores, you simply modify the above setup to use encryption instead of the unencrypted method used for testing.
    So far this has worked best for me. If anyone needs some example code, let me know and I can post it.
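    For comparison, the same create-then-populate pattern is less painful on platforms whose SQLite bindings support prepared statements and batching. Here is a minimal sketch in Java using the sqlite-jdbc driver; the "levels" table and its contents are hypothetical placeholders, just to show the shape:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;
    public class GameDbSetup {
        public static void main(String[] args) throws Exception {
            // Open (and create, if missing) the local database file.
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:game.db")) {
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE IF NOT EXISTS levels ("
                            + "id INTEGER PRIMARY KEY, name TEXT, unlocked INTEGER)");
                }
                conn.setAutoCommit(false); // one transaction for the whole load
                String sql = "INSERT INTO levels (id, name, unlocked) VALUES (?, ?, ?)";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (int i = 1; i <= 50; i++) {
                        ps.setInt(1, i);
                        ps.setString(2, "Level " + i);
                        ps.setInt(3, i == 1 ? 1 : 0); // only level 1 starts unlocked
                        ps.addBatch();
                    }
                    ps.executeBatch();
                }
                conn.commit();
            }
        }
    }
    The prepared statement keeps the repetitive inserts cheap, and the single transaction means a user abandoning the app mid-population leaves no half-written rows behind.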

  • Best Practice Table Creation for Multiple Customers, Weekly/Monthly Sales Data in Multiple Fields

    We have a homegrown Access database, originally designed in 2000, that now has a SQL back end. The database has not yet been converted to a higher format such as Access 2007, since at least 2 users are still on Access 2003. It is fine if suggestions only work with Access 2007 or higher.
    I'm trying to determine if our database is the best place to do this or if we should look at another solution. We have thousands of products, each with a single identifier. There are customers who provide us regular sales reporting for what was sold in a given time period, with weekly, monthly, quarterly, and yearly time periods being most important. This reporting may or may not include all of our product identifiers. The reporting is typically based on calendar-defined timing, although we have some customers with their own calendars, which may not align to a calendar month or calendar year, so recording the time period can be helpful.
    Each customer's sales report can contain anything from 1,000-20,000 rows of products. Each customer report is different, and they typically have between 4-30 columns of data for each product; headers are consistently named. The product identifiers included may vary by customer and even between reports from the same customer; the data in the product identifier rows changes each week. Headers include a wide variety of data such as overall on hand, overall on order, unsellable on hand, returns, on-hand information for each location or customer grouping, sell-through units and dollars for each location or customer grouping for that given time period, sell-through units and dollars for each location or customer grouping for a cumulative time period, warehouse on hands, warehouse on orders, the customer's unique categorization of our product in their system, the customer's current status code for that product, and so on.
    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to the overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables for our largest customers so I can create queries and pivot tables to more quickly look at sales-related information by category, by specific product(s), by partner, by specific products or categories across partners, by specific products or categories across specific weeks/months/years, etc.? We have a separate product table, so only the product identifier or a junction table may be needed to pull in additional information from the product table with queries. We do need to maintain the sales reporting information indefinitely.
    I welcome any suggestions, best practices or resources (books, web, etc).
    Many thanks!

    Currently all of this data is stored in a multitude of Excel spreadsheets (by customer, division and time period). Due to overall volume of information and number of Excel sheets, cross-referencing can take considerable time. Is it possible to set up tables .....
    I assume you want to migrate to SQL Server.
    Your best course of action is to hire a professional database designer for a short period like a month.
    Once you have the database, you need to hire a professional DBA to move your current data from Access & Excel into the new SQL Server database.
    Finally you have to hire an SSRS professional to design reports for your company.
    It is also beneficial if the above professionals train your staff while building the new RDBMS.
    Certain senior SQL Server professionals may be able to do all 3 functions in one person: db design, database administration/ETL & business intelligence development (reports).
    Kalman Toth Database & OLAP Architect
    SELECT Video Tutorials 4 Hours
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012

  • Best Practice to fetch SQL Server data and Insert into Oracle Tables

    Hello,
    I want to read SQL Server data every half an hour and write it into Oracle tables (in two different databases). What is the best practice for doing this?
    We do not have any database links from Oracle to SQL Server or vice versa.
    Any help is highly appreciated.
    Thanks

    Well, that's easy:
    use a TimerTask to do the following every half an hour:
    - open a connection to SQL Server
    - open two connections to the Oracle databases
    - for each row you read from SQL Server, do the inserts into the Oracle databases
    - commit
    - close all connections
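    A minimal sketch of that loop in Java follows; the connection strings, table, and column names are made-up placeholders (any JDBC drivers for SQL Server and Oracle would do):
    import java.sql.*;
    import java.util.Timer;
    import java.util.TimerTask;
    public class SqlServerToOracle {
        public static void main(String[] args) {
            // Fire the copy every half an hour, starting now.
            new Timer().scheduleAtFixedRate(new TimerTask() {
                public void run() { copyOnce(); }
            }, 0L, 30L * 60L * 1000L);
        }
        static void copyOnce() {
            String insert = "INSERT INTO target_table (id, val) VALUES (?, ?)";
            try (Connection src = DriverManager.getConnection(
                         "jdbc:sqlserver://srchost;databaseName=srcdb", "user", "pw");
                 Connection ora1 = DriverManager.getConnection(
                         "jdbc:oracle:thin:@orahost1:1521/db1", "user", "pw");
                 Connection ora2 = DriverManager.getConnection(
                         "jdbc:oracle:thin:@orahost2:1521/db2", "user", "pw");
                 Statement st = src.createStatement();
                 ResultSet rs = st.executeQuery("SELECT id, val FROM source_table");
                 PreparedStatement p1 = ora1.prepareStatement(insert);
                 PreparedStatement p2 = ora2.prepareStatement(insert)) {
                ora1.setAutoCommit(false);
                ora2.setAutoCommit(false);
                // For each row read from SQL Server, insert into both Oracle databases.
                while (rs.next()) {
                    p1.setInt(1, rs.getInt("id"));
                    p1.setString(2, rs.getString("val"));
                    p1.executeUpdate();
                    p2.setInt(1, rs.getInt("id"));
                    p2.setString(2, rs.getString("val"));
                    p2.executeUpdate();
                }
                ora1.commit();
                ora2.commit();
            } catch (SQLException e) {
                e.printStackTrace(); // log and retry on the next run
            }
        }
    }
    A production version would track a high-water mark so each run only moves new rows, and would batch the inserts, but the shape is the same.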

  • Best Practice for Image placement and Anchored Frames for use in Robohelp 9

    Hi,
    I'm looking for best practices on how to lay out my images in Framemaker 10 so that they translate correctly to Robohelp 9. I currently have images inside anchored frames that "run into" the right side of my text. I've adjusted the size of the anchored frame so that my text flows correctly around the image. Everything looks good in Framemaker! Yeah! The problem is that when I link my Framemaker document to Robohelp, the text does not flow around my image in the same manner; on a couple of Robohelp screens the image runs into the footer. I'm wondering if I should be using tables in Framemaker in order to get the page layout that I'm looking for. I also went back and forth on whether this is a Framemaker question or a Robohelp question. Any assistance would be greatly appreciated.

    I think Jeff means this section of the RoboHelp forums:
    http://forums.adobe.com/community/robohelp/robohelp_framemaker

  • Best practice for a same query against 2 different tables

    Hello all,
    I want to extract info about tablespace storage, both permanent and temporary. For that I use 2 different cursors that do exactly the same query, but against different tables (dba_data_files and dba_temp_files).
    CURSOR permanentTBSStorageInfo (tablespaceName VARCHAR2) IS
        SELECT file_name, bytes, autoextensible, maxbytes, increment_by
        FROM dba_data_files
        WHERE tablespace_name = tablespaceName;
    CURSOR temporaryTBSStorageInfo (tablespaceName VARCHAR2) IS
        SELECT file_name, bytes, autoextensible, maxbytes, increment_by
        FROM dba_temp_files
        WHERE tablespace_name = tablespaceName;
    First, I'm bothered that I have to use 2 cursors to execute the same query against 2 different tables. Is there no other way around this?
    Then I fetch the results of these cursors in 2 different loops, because I didn't find a way to call the cursors dynamically. I am looking for best practice here, knowing that I will do the same parsing on the results of both cursors.
    Thank you,

    Hi
    Check whether the query below is helpful:
    select fs.tablespace_name "Tablespace",
           fs.tempspace "Temp MB",
           df.totalspace "Total MB"
    from (select tablespace_name,
                 round(sum(bytes) / 1048576) totalspace
          from dba_data_files
          group by tablespace_name) df,
         (select tablespace_name,
                 round(sum(bytes) / 1048576) tempspace
          from dba_temp_files
          group by tablespace_name) fs
    where df.tablespace_name = fs.tablespace_name;
    Thanks

  • Best practice for if/else when one outcome results in exit [Bash]

    I have a bash script with a lot of if/else constructs in the form of
    if <condition>
    then
        <do stuff>
    else
        <do other stuff>
        exit
    fi
    This could also be structured as
    if ! <condition>
    then
        <do other stuff>
        exit
    fi
    <do stuff>
    The first one seems more structured, because it explicitly associates <do stuff> with the condition.  But the second one seems more logical because it avoids explicitly making a choice (then/else) that doesn't really need to be made.
    Is one of the two more in line with "best practice" from pure bash or general programming perspectives?

    I'm not sure if there are 'formal' best practices, but I tend to use the latter form when (and only when) it is some sort of error checking.
    Essentially, this would be when <do stuff> was more of the main purpose of the script, or at least that neighborhood of the script, while <do other stuff> was mostly cleaning up before exiting.
    I suppose more generally, it could relate to the size of the code blocks.  You wouldn't want a long involved <do stuff> section after which a reader would see an "else" and think 'WTF, else what?'.  So, perhaps if there is a substantial disparity in the lengths of the two conditional blocks, put the short one first.
    But I'm just making this all up from my own preferences and intuition.
    When nested this becomes more obvious, and/or a bigger issue.  Consider two scripts:
    if [[ test1 ]]
    then
        if [[ test2 ]]
        then
            echo "All tests passed, continuing..."
        else
            echo "failed test 2"
            exit
        fi
    else
        echo "failed test 1"
    fi
    if [[ ! test1 ]]
    then
        echo "failed test 1"
        exit
    fi
    if [[ ! test2 ]]
    then
        echo "failed test 2"
        exit
    fi
    echo "passed all tests, continuing..."
    This just gets far worse with deeper levels of nesting. The second seems much cleaner. In reality, though, I'd go even further:
    [[ ! test1 ]] && echo "failed test 1" && exit
    [[ ! test2 ]] && echo "failed test 2" && exit
    echo "passed all tests, continuing..."
    edit: added test1/test2 examples.
    Last edited by Trilby (2012-06-19 02:27:48)
