Best Practice - Data load

Hi,
What is the best practice for migrating master data on a standalone CRM?
Any advice will be highly appreciated.
Cheers
Guest

Hi,
Please read the following threads carefully and you will be able to work out the best method yourself.
Initial Load on standalone CRM
Initial load on standalone CRM
CRM Master Data
Reward points if this helps,
Regards,
Paul Kondaveeti

Similar Messages

  • BPC:NW - Best practices to load Transaction data from ECC to BW

    I have a very basic question about loading GL transaction data into BPC for a variety of purposes. It would be great if you could point me towards best practices/standard ways of building such interfaces.
    1. For Planning
    When we are doing planning for cost center expenses and need to produce variance reports against the budgets, what would be the source InfoCube/DSO for loading the data from ECC via BW, if the application uses:
    YTD entry mode:
    Periodic entry mode:
    What difference does it make to use the 0FI_GL_12 DataSource versus the 0FIGL_C10 cube or the 0FIGL_O14 or 0FIGL_D40 DSOs?
    Based on the data entry mode of the planning application, what is the best way to make use of the 0BALANCE or debit/credit key figures on the BI side?
    2. For consolidation:
    Since we need to have the trading partner, what are the best practices for loading the actual data from ECC?
    What are the typical mappings to be maintained between movement types and flow dimensions?
    I have seen multiple threads with different responses, but I am looking for the best practices and the scenarios you are using to load such transactions from the OLTP system. I would really appreciate it if you could provide some functional insight into such scenarios.
    Thanks in advance.
    -SM

    For planning, please take a look at the SAP Extended Financial Planning rapid-deployment solution: the G/L Financial Planning module. This RDS captures best-practice integration from BPC 10 NW to SAP G/L. This RDS (including content and documentation) is free to licensed customers of SAP BPC. It leverages the 0FIGL_C10 cube mentioned above.
      https://service.sap.com/public/rds-epm-planning
    For consolidation, please take a look at SAP Financial Close & Disclosure Management rapid-deployment solution.   This RDS captures best practice integration from BPC 10 NW to SAP G/L.  This RDS (including content and documentation) is also free to licensed customers of SAP BPC.
    https://service.sap.com/public/rds-epm-fcdm
    Note:  You will require an SAP ServiceMarketplace ID (S-ID) to download the underlying SAP RDS content and documentation.
    The RDS documentation discusses the how and why of best-practice integration. You can also contact me directly at [email protected] for consultation.
    We are also in the process of rolling out updated free 2015 training on these two RDS packages. Please register at the link below and you will be sent an invite.
    https://www.surveymonkey.com/s/878J92K
    If the link is inactive at some point after this post, please contact [email protected]

  • Best practices for loading APO planning book data to a cube for reporting

    Hi,
    I would like to know whether there are any best practices for loading APO planning book data to a cube for reporting.
    I have seen 2 types of Design:
    1) The Planning Book extractor data is loaded first to a cube within the APO BW system, and then gets transferred to the actual BW system. Reports are run from the actual BW system cube.
    2) The Planning Book extractor data is loaded directly to a cube within the actual BW system.
    We do these data loads once a day, during evening hours.
    Rgds
    Gk

    Hi GK,
    What I have normally seen is:
    1) Data would be extracted from the APO planning area to an APO cube (for backup purposes), weekly or monthly, depending on how much data change you expect or how critical it is for the business. Backups are mostly monthly for DP.
    2) Data is extracted from the APO planning area directly to a DSO in the staging layer in BW, and then to BW cubes, for reporting.
    For DP this is monthly, for SNP daily.
    You can also use option 1 that you mentioned. In this case, the APO cube is the backup cube, while the BW cube is the one you use for reporting, and the BW cube gets its data from the APO cube.
    The benefit in this case is that we have to extract data from the planning area only once, so the planning area is available for jobs/users for more time. However, backup and reporting extraction get mixed in this case, so issues in the flow could impact both the backup and the reporting. We have used this scenario recently and are yet to see the full impact.
    Thanks - Pawan

  • How to load best practices data into a CRM 4.0 installation

    Hi,
      We have successfully installed CRM 4.0 on a lab system and now would like to install the CRM Best Practices data into it.
      If I refer to the CRM BP help site http://help.sap.com/bp_crmv340/CRM_DE/index.htm,
    it looks like I need to install at least the following in order to run it properly.
    C73: CRM Essential Information 
    B01: CRM Generation 
    C71: CRM Connectivity 
    B09: CRM Replication 
    C10: CRM Master Data 
    B08: CRM Cross-Topic Functions
    I am not sure where to start and where to end. At a minimum, I need CRM Sales to start with.
    Is there just one installation CD or a number of them? Also, are they available in the download area of service.sap.com?
    Appreciate the response.

    Of course you need to install the Best Practices configuration, or do your own config.
    Simply installing CRM 4.0 from the distribution CD/DVD will get you a plain-vanilla CRM system with no configuration and obviously no data. The Best Practices guide you through the process of configuring CRM, and even automate some tasks. If you use some of the CATT processes from the Best Practices, you can even populate data in your new system (BP data, or replace the input files with your own data).
    In 12 years of SAP consulting, I have NEVER come across a situation where you simply install SAP from the distribution media and can start using it without ANY configuration.
    My advice is to work through the base configuration modules first, either by importing the BP config/data or following the manual instructions to create the config/data yourself. Next, look at what your usage of CRM is going to be, for example Internet Sales, Service Management, et cetera, and then install the config for those modules.

  • Best Practices for Loading Master Data via a Process Chain

    Currently, we load attributes, texts, and hierarchies before loading the transactional data. We have one meta chain. Loading the master data takes more than 2 hours, and most of the master data loads are full loads. We've noticed that a lot of the master data, especially text, has not changed or has changed very little since we implemented 18 months ago. Is there a precedent or best practice to follow, such as removing these processes from the chain? If so, how often should it be run? We would really like to reduce the master data loading time. Is there any documentation that I can refer to? What are other organizations doing to reduce the time it takes to load master data?
    Thanks!
    Debby

    Hi Debby,
    I assume you're loading Master Data from a BI system? The forums here are related to SAP NetWeaver MDM, so maybe you should ask this question in a BI forum.
    Nevertheless, if your data doesn't change that much, maybe you could use a delta mechanism for extraction. This would send only the changed records rather than all the unchanged ones every time. But this depends on your master data and, of course, on your extractors.
    Cheers
    Michael

  • Best Practices for Loading Data in 0SD_C03

    Hi Gurus, I want to know the best practice for getting information about sales, billing, and delivery. I know the following DataSources exist:
    • Sales Order Item Data - 2LIS_11_VAITM
    • Billing Document Data: Items - 2LIS_13_VDITM
    • Billing Document Header Data - 2LIS_13_VDHDR
    • Sales-Shipping: Allocation Item Data - 2LIS_11_V_ITM
    • Delivery Header Data - 2LIS_12_VCHDR
    • Delivery Item Data - 2LIS_12_VCITM
    • Sales Order Header Data - 2LIS_11_VAHDR
    Do I have to load all of these DataSources into InfoCube 0SD_C03, or do I have to create copies of 0SD_C03 to match each DataSource?

    Hi,
        If you just want statistics on the amounts or quantities in the sales process, I suggest you create 3 cubes and then use a MultiProvider to integrate the 3 cubes you created. For example:
        2LIS_11_VAITM  -> ZSD_C01
        2LIS_12_VCITM -> ZSD_C02
        2LIS_13_VDITM -> ZSD_C03
    In this scenario, you can enhance 2LIS_12_VCITM and 2LIS_13_VDITM with sales order data, such as requested delivery date, and then create a MultiProvider such as ZSD_M01.
    Best Regards
    Martin Xie

  • Best practice to Load FX rates to Rate Application in SAP BPC 7.5 NW

    Hi,
    What is the best practice/approach to load FX rates to Rate Application in SAP BPC 7.5 NW? Is it from ECC or BW?
    Thanks,
    Rushi

    I have seen both cases.
    1) Rates come as a flat file from an external system such as the treasury department, and both ECC and BPC load them into their respective systems in batch.
    2) ECC pushes rate info to BW, and the data in turn gets pushed to BPC along with other scheduled process chains.
    How are rates entered into your ECC system?
    Shilpa

  • Upcoming SAP Best Practices Data Migration Training - Chicago

    YOU ARE INVITED TO ATTEND HANDS-ON TRAINING
    SAP America, Downers Grove in Chicago, IL:
    November 3 – 5, 2010
    Installation and Deployment of SAP Best Practices for Data Migration & SAP BusinessObjects Data Services
    Install and learn how to use the latest SAP Best Practices for Data Migration package. This new package combines the familiar IDoc technology together with the SAP BusinessObjects (SBOP) Data Services to load your customer's legacy data to SAP ERP and SAP CRM (New!).
    Agenda
    At the end of this unique hands-on session, participants will depart with the SBOP Data Services and SAP Best Practices for Data Migration installed on their own laptops. The three-day training course will cover all aspects of the data migration package including:
    1.     Offering Overview – Introduction to the new SAP Best Practices for Data Migration package and data migration content designed for SAP BAiO / SAP ERP and SAP CRM
    2.     Data Services fundamentals – Architecture, source and target metadata definition. Process of creating batch jobs, validating, tracing, debugging, and data assessment.
    3.     Installation and configuration of the SBOP Data Services – Installation and deployment of the Data Services and content from SAP Best Practices. Configuration of your target SAP environment and deploying the Migration Services application.
    4.     Customer Master example – Demonstrations and hands-on exercises on migrating an object from a legacy source application through to the target SAP application.
    5.     Overview of Data Quality within the Data Migration process – A demonstration of the Data Quality functionality available to partners using the full Data Services toolset as an extension to the Data Services license.
    Logistics & How to Register
    Nov. 3 – 5: SAP America, Downers Grove, IL
                     Wednesday 10AM – 5PM
                     Thursday 9AM – 5PM
                     Friday 8AM – 3PM
                     Address:
                     SAP America – Buckingham Room
                     3010 Highland Parkway
                     Downers Grove, IL USA 60515
    Partner Requirements:  All participants must bring their own laptop to install SAP Business Objects Data Services on it. Please see attached laptop specifications and ensure your laptop meets these requirements.
    Cost: Partner registration is free of charge
    Who should attend: Partner team members responsible for customer data migration activities, or for delivery of implementation tools for SAP Business All-in-One solutions. Ideal candidates are:
    •         Data Migration consultants and IDoc experts involved in data migration and integration projects
    •         Functional experts who perform mapping activities for data migration
    •         ABAP developers who write load programs for data migration
    Trainers
    Oren Shatil – SAP Business All-in-One Development
    Frank Densborn – SAP Business All-in-One Development
    To register please use the hyperlink below.
    http://service.sap.com/~sapidb/011000358700000917382010E

    Hello,
    The link does not work. Is this training still available?
    Regards,
    Romuald

  • Upcoming SAP Best Practices Data Migration Training - Berlin

    YOU ARE INVITED TO ATTEND HANDS-ON TRAINING
    Berlin, Germany: October 06 – 08, 2010
    Installation and Deployment of SAP Best Practices for Data Migration & SAP BusinessObjects Data Integrator
    Install and learn how to use the latest SAP Best Practices for Data Migration package. This new package combines the familiar IDoc technology together with the SAP BusinessObjects (SBOP) Data Integrator to load your customer's legacy data to SAP ERP and SAP CRM (New!).
    Agenda
    At the end of this unique hands-on session, participants will depart with the SBOP Data Integrator and SAP Best Practices for Data Migration installed on their own laptops. The three-day training course will cover all aspects of the data migration package including:
    1.     Offering Overview – Introduction to the new SAP Best Practices for Data Migration package and data migration content designed for SAP BAiO / SAP ERP and SAP CRM
    2.     Data Integrator fundamentals – Architecture, source and target metadata definition. Process of creating batch jobs, validating, tracing, debugging, and data assessment.
    3.     Installation and configuration of the SBOP Data Integrator – Installation and deployment of the Data Integrator and content from SAP Best Practices. Configuration of your target SAP environment and deploying the Migration Services application.
    4.     Customer Master example – Demonstrations and hands-on exercises on migrating an object from a legacy source application through to the target SAP application.
    Logistics & How to Register
    October 06 – 08: Berlin, Germany
                     Wednesday 10AM – 5PM
                     Thursday 9AM – 5PM
                     Friday 9AM – 4PM
                     SAP Deutschland AG & Co. KG
                     Rosenthaler Strasse 30
                     D-10178 Berlin, Germany
                     Training room S5 (1st floor)
    Partner Requirements:  All participants must bring their own laptop to install SAP Business Objects Data Integrator on it. Please see attached laptop specifications and ensure your laptop meets these requirements.
    Cost: Partner registration is free of charge
    Who should attend: Partner team members responsible for customer data migration activities, or for delivery of implementation tools for SAP Business All-in-One solutions. Ideal candidates are:
    •         Data Migration consultants and IDoc experts involved in data migration and integration projects
    •         Functional experts who perform mapping activities for data migration
    •         ABAP developers who write load programs for data migration
    Trainers
    Oren Shatil – SAP Business All-in-One Development
    Frank Densborn – SAP Business All-in-One Development
    To register please follow the hyperlink below
    http://intranet.sap.com/~sapidb/011000358700000940832010E

    Hello,
    The link does not work. Is this training still available?
    Regards,
    Romuald

  • How to best reduce data load on a Mac due to duplicate Adobe files?

    I just got hired at a small business. I don't have a lot of experience with Macs, so I need to know some best practices here.
    I am working with CS3, Ai, Ps, Id, and later, Dw.
    It's a magazine publishing company. I am organizing it so each magazine has its own folder, and I want to have "old editions" and "working edition" folders. Within each, I want to break it down into "Ads this issue", "Links", and "Stories".
    The Ads and Links are where I'm concerned. I want to have a copy of each ad's file within that folder, and a copy of all the other files it's linked to, so that if the original ads/images get moved, the links won't be disturbed.
    I'm wondering if there is a way to do this without bogging down the machine's HD with duplicates of really large files. The machine moves slow enough as it is.
    I've theorized that I could:
    A) keep the Main "Ads" folder along with the subfolders compressed, and the "old editions" compressed, and have a regular copy in the working folder only. This also works because the ads get edited for different editions sometimes.
    or
    B) Is there a way to do this with aliases? Being unfamiliar with aliases, or even shortcuts, because I haven't worked in an actual production environment yet, I don't know the functionality of linking an alias into an ID file. I read a couple of previous posts and the outlook isn't very good for it.
    or
    C) Just place a PDF (or whatever you guys think is the best quality preserving filetype) in with the magazine itself? Then each company could have its own ad folder with all the rest of the files...
    What do you all think? If you can even link me to a post that goes into further detail on which option you think is best, or if  you have a different solution, that would be wonderful. I am open to answers.
    I want to be sure to leave a cleaner computer/work environment than the last few punks who were here... That's my "best practice". Documentation and file organization got drilled into me at Uni.

    Sorry, I am overcaffeinated today, so this response is kind of long.
    "Data load?" Do you mean that:
    a) handling lots of large files is too much for your computer to handle, or
    b) simply having lots of large files on your hard drive (even if they are not currently in use) slows your computer down?
    Because b) is pretty much impossible, unless you are almost out of space on your system drive. Which can be ameliorated by... buying another drive.
    I once set up an install of InDesign on a Mac for a friend of mine who is chipping away at a big-data math PhD. and who is sick to death of LaTeX. (Can't blame her, really.) Because we are both BSD nerds from way back, she wanted to do what you are suggesting - but instead of thinking about aliases, which you are correct to regard with dubiousness, she wanted to do it with hardlinks. Which worked, more or less. She liked it. Seemed like overkill to me.
    I suspect that this is because she is a highfalutin' academic whereas I am a production wonk in a business. I have to compare the cost of my time resolving a broken-link issue due to a complicated archiving scheme versus Just Buying Another Drive. Having clocked myself on solving problems induced by complicated archival schemes (or failure of overworked project managers to correctly follow the rules for same) I know that it doesn't take many hours of my work invested in combing through archives or rebuilding lost image files to equal Another Drive that I can go out and Just Buy.
    If you set up a reasonable method of file organization, and document it clearly, then you have already saved your organization (and your successors!) significant amounts of time and cash. Hard drive space is cheap. Don't spend your time on figuring out a way to save a few terabytes here and there. In fact, what I'd suggest for you is to try to figure out how many terabytes you've already spent on this question, by figuring out todays ratio of easily purchaseable reliable external hard drives to your unit of preferred currency, then figuring out how many hours you've already spent on the question.
    The only reason I can make this argument is that the price per unit of magnetic data storage has, with remarkably few exceptions, been constantly plummeting for decades, while the space requirements for documentation have been going up comparatively slowly. If you need a faster computer to do your job more efficiently, then price out an SSD for your OS, applications, and jobs-on-deck, and then show your higher-ups the math that proves that the SSD pays for itself in your saved time within n weeks. My gut feeling these days is that, unless you are seriously underpaid, n is between two and six.
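    To make that payback math concrete (with purely illustrative numbers): if the SSD costs $200 and it saves you 2 hours a week at a $50/hour fully loaded rate, it pays for itself in 200 / (2 × 50) = 2 weeks.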
    Finally: I didn't really address your suggested possibilities. Procedure C (placing PDFs) usually works, but you do need to figure out how to make PDFs in such a way as to ensure they play nicely with your print method. Procedure A (compress stuff you don't need anymore) probably works okay, but I hope that you have some sort of command-line scripting ability to be able to quickly route stuff into and out of archives.

  • Best Practice: Data Binding

    Each non-trivial Swing application needs to bind GUI elements (windows and components) to data. Ten years back, in my MFC times, the typical solution was to initialize the GUI with fixed data, show the window, and read back the modified data after window closure. But in Swing there is more flexibility (data can be dynamic, for example). So what is the best way to deal with data binding?
    - Writing custom models, bound directly to data objects?
    - Initializing once and read back after window closure?
    - Using the non-standard (but rather interesting) beans binding API?
    Certainly there are more solutions, but my intention is not to write a list of possibilities or to get your opinion on one of these solutions. Actually I'd like to know from the experienced Swing pros in this forum, whether there is a best practice for data binding?

    Hi,
    I'd say it depends on what kind of data you are binding and whether it is refreshed or not. In our "framework" we're using a Properties-based approach for dialogs: the server packs an @Entity into a class similar to Properties - basically it's a Map<String, Object> - sends these Properties to the client, and the values are put into our customised components. We're using the component name as the key, and most controls are simple extensions of the standard JComponents with a handful of methods from a common interface (like load(), save(), ...). This approach seems to work fine.
    For refreshed "lists" we're again using our custom "framework" based on sending an initial batch and subsequent diffs - there is a sort of event queue responsible for telling everyone what has changed. Because this kind of data tends to be quite huge, we're sending a gzipped binary representation of the source data.
    With beans there might be a way to implement both simply by calling getters, but then every single value results in one round trip, which was absolutely unacceptable for us. But I can imagine this working fine on a LAN.
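    To illustrate the Properties/Map-based approach described above, here is a minimal, self-contained Swing sketch. The names (Bindable, BoundTextField, FormBinder) are hypothetical stand-ins for the custom framework mentioned in the reply; the key idea is that each component's name is the key into a Map<String, Object>, and each bound control implements load()/save() from a common interface.

    import javax.swing.*;
    import java.awt.Component;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical common interface for bound controls (stands in for the custom framework).
    interface Bindable {
        void load(Object value);   // push a value from the map into the component
        Object save();             // read the (possibly edited) value back out
    }

    // A text field that participates in binding; its component name is the map key.
    class BoundTextField extends JTextField implements Bindable {
        BoundTextField(String name) { setName(name); }
        public void load(Object value) { setText(value == null ? "" : value.toString()); }
        public Object save() { return getText(); }
    }

    // Walks a container, pushing map values into bound components and reading them back.
    class FormBinder {
        static void load(JComponent root, Map<String, Object> data) {
            for (Component c : root.getComponents()) {
                if (c instanceof Bindable && data.containsKey(c.getName())) {
                    ((Bindable) c).load(data.get(c.getName()));
                }
            }
        }
        static Map<String, Object> save(JComponent root) {
            Map<String, Object> out = new HashMap<>();
            for (Component c : root.getComponents()) {
                if (c instanceof Bindable) {
                    out.put(c.getName(), ((Bindable) c).save());
                }
            }
            return out;
        }
    }

    public class BindingDemo {
        public static void main(String[] args) {
            JPanel form = new JPanel();
            form.add(new BoundTextField("customerName"));

            Map<String, Object> data = new HashMap<>();
            data.put("customerName", "ACME Corp");

            FormBinder.load(form, data);                         // initialize the dialog
            // ... user edits the form here ...
            Map<String, Object> edited = FormBinder.save(form);  // read back after closure
            System.out.println(edited);
        }
    }

    This sketch only handles a flat container and text fields; the refreshed-list case with diffs would need the event-queue mechanism described above.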

  • Best practice for load balancing on SA540

    Is there a 'best practice' guide for configuring outbound load balancing on the SA540?
    I've got 2 ADSL lines and would like the device to auto-manage outgoing traffic. Any ideas?
    Regards

    Hi,
    The SA500 today implements a flow-based round-robin load-balancing scheme.
    In the case of two WAN links (over ADSL), by default the traffic should be "roughly" equally distributed.
    So in general, users should have no need to configure anything further for load balancing.
    The SA500 also supports protocol binding (~PBR) over WAN links. This mechanism offers more control on how traffic can flow.
    For example, if one ADSL link offers higher throughput than the other, you can consider binding bandwidth-hungry apps to the WAN link connected to the faster ADSL line and the less bandwidth-hungry apps to the other one. The remaining traffic can continue to round robin. This way you won't saturate the low-bandwidth link, and users get a better application experience.
    Regards,
    Richard

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult and we need some help. We want to be able to compare side-by-side actual measures with their corresponding objectives/targets. Sounds simple. But, our objectives are static (not able to be aggregated) with multi-dimensionality and multi-levels. We need some best practice tips on how to design our data model and repository properly so that we can see the objective/target for a measure regardless of the dimensions that are used in the criteria and regardless of the level.
    Here are some more details:
    Example of existing objective table.
    Dimension1   Dimension2   Dimension3   Obj1   Obj2   Quarter
    NULL         NULL         NULL         .99    1.8    1Q13
    DIM1VAL1     NULL         NULL         .99    2.4    1Q13
    DIM1VAL1     DIM2VAL1     NULL         .98    2.41   1Q13
    DIM1VAL1     DIM2VAL1     DIM3VAL1     .97    2.3    1Q13
    DIM1VAL1     NULL         DIM3VAL1     .96    1.9    1Q13
    NULL         DIM2VAL1     NULL         .97    2.2    1Q13
    NULL         DIM2VAL1     DIM3VAL1     .95    2.0    1Q13
    NULL         NULL         DIM3VAL1     .94    3.1    1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure they could get back a different objective. They could add Dimension1 and get 99%. They could add Dimension1 and Dimension2 and get 98%. They could add all three dimensions and get 97%. They could add zero dimensions (highest grain) and get 99%. Using our existing structure if we were to add a new dimension to the mix the possible combinations would grow dramatically. (Not flexible)
    - We would like our final solution to be flexible enough so that we could view objectives with altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
    Could anyone share a similar situation where you have implemented a data model structure with the proper repository joins to handle showing side-by-side objectives/targets where the objectives were static and could be displayed at differing levels with flexible dimensions as described?
    Any help would be greatly appreciated.

    Hi, yes, this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS Topic, which will in turn insert the data into a DB. Alternatively, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. As for altering the DB (I mean, by changing config files, making the data go to a custom schema created by a user instead of that Reports schema), I don't know if it can be done. My problem is that when I configure the JMS Topic for sensor actions, I see blank data coming in; for some reason or other the data is not getting posted. I have used an ESB with a routing service based on the schema I am monitoring. Can anyone help?

  • Best practices for loading swf's

    Greetings,
    Using CS5 AS2
    I'm creating a website in flash (all the files will be in one directory/folder on SharePoint) and want to make sure that what seems to be working fine is best practice.
    I have an index.swf with many buttons which will take the user to landing pages/content/other swfs. On these different buttons I have the script...
    on (release) {loadMovieNum("name.swf", 0);}                I could also do just {loadMovie("name.swf", 0);} ??
    The movie transitions nicely to name.swf and on this page I have a button that returns the user to the index.swf...
    on (release) {loadMovieNum("index.swf", 0);}   Things move back to index.swf nicely and user can chose to go to another landing page.
    It looks like I'm on the right track, because nothing is going awry, but I want to check. Am I following best practices for moving from one SWF to another within a website?
    Thanks for help or confirmation!!

    Loading into _level0 (and you should use loadMovieNum, not loadMovie, when loading into a level) undermines some of the benefits of a Flash application: all the assets must load with each display change, and the user sees a flash instead of appearing to transition seamlessly from one display to the next.

  • Question - Best practice data source for Vs2008 and Crystal Reports 2008

    I have posted a question here
    CR2008 using data from .NET data provider (ADO.NET DATASET from a .DLL)
    but I think that perhaps I need general community advice on best practice with data sources.
    In Crystal reports I can choose the data source location from any number of connection types, eg ado.net(xml), com, oledb, odbc.
    Now, in regard to that post: the reports were all created in Crxi 6.3, upgraded to Crystal XI, and now I'm using the latest and greatest. I wrote the Crystal Reports 6.3/XI reports back in the day to do the following: the reports use a function from a COM object which returns an ADO recordset, which is then consumed fine.
    So I don't want to rewrite all these reports, of which there are many.
    I would like to know if any developers are actually using .NET Class libraries to return ADO.NET datasets via the method call or if you are connecting directly to XML data via whatever source ( disk, web service, http request etc).
    I have not been able to eliminate the problem listed in the post mentioned above, which is that the Crystal Report is calling the .NET class library method twice before displaying the data. I have confirmed this by debugging the class lib.
    So any guidance or tips is appreciated.
    Thanks

    This is already being discussed in one of your other threads. Let's close this one out and concentrate on the one I've already replied to.
    Thanks
