Data Load -- Best Practices Analytics 9.0.1

We are currently implementing Essbase, and I would be interested in feedback concerning data load practices.

We have a front-end system which delivers live operational data into a SQL database. Currently, I use Access to run queries against that data and load the results into Enterprise, but I would like to move to an automated, daily load for Essbase. At this point in Essbase, I have several load rules that I apply to Excel files exported from Access (not a good solution). I would assume that a better answer would be a SQL load, but I wonder how others typically go about loading information. What about loading financial data consolidated in another system (Enterprise)?

Thanks for any feedback,

Chris
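For the automated daily load Chris asks about, one common pattern is to extract straight from the SQL database into a delimited file that an Essbase load rule can pick up, skipping the Access/Excel step. Here is a minimal JDBC sketch of that extract step; the connection URL, query, column names, and output file are hypothetical placeholders, not details from the original post.

    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class EssbaseExtract {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details and query; substitute your own.
            String url = "jdbc:sqlserver://opsdb;databaseName=operations";
            String sql = "SELECT account, entity, period, amount FROM fact_sales";

            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql);
                 PrintWriter out = new PrintWriter("daily_load.txt")) {
                // One tab-delimited record per row; an Essbase load rule can then be
                // defined against this file on a scheduled daily run.
                while (rs.next()) {
                    out.println(rs.getString("account") + "\t"
                            + rs.getString("entity") + "\t"
                            + rs.getString("period") + "\t"
                            + rs.getBigDecimal("amount"));
                }
            }
        }
    }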

Wanted to give an update of my progress today.
I again began with a clean installation of 9.0.0, brought up the CF Administrator, and completed the installation. From there, I went directly to installing the 9.0.1 update and the 9.0.1 hotfix. To my amazement, the CF Administrator came up without issue. But . . .
I then went into the Administrator to apply my 'customizations' (i.e. my datasources, my SMTP mail server, my custom tags, etc.). Truly nothing unusual; almost sad to say, vanilla. I then shut down the service, as recommended, so that some of the changes would 'take effect'. Boom: the CF Administrator no longer appears; it gives me the blank screen and the same error messages I listed in my first note. So again, it must be something I turned on or off incorrectly, but I don't even know where to look.
Would this be considered a bug?
Libby H

Similar Messages

  • Data Load Best Practice.

    Hi,
    I need to know the best way to load data from a source. Is a SQL load the best way, or is using data files better? What are the inherent advantages and disadvantages of the two approaches?
    Thanks for any help.

    I faced a scenario that I'll explain here.
    I had an ASO cube, and data was being loaded from a text file on a daily basis; the data volume was huge. There were problems in the data file as well as in the master file (the file used for dimension builds).
    The data and master files contained special characters such as ' , : ~ ` # $ %, blank spaces, and tab spaces, and even the ETL process could not remove them because they occurred within the data itself.
    Sometimes comments or database error text were also present in the data file.
    I had problems building a rule file with a different delimiter; most of the time I found the same character within the data that was being used as the delimiter, which increased the number of data fields and made Essbase raise errors.
    So I used a SQL table for the data load: a launch (staging) table is created and populated, and all errors are removed there before the data is loaded into Essbase.
    That was my scenario (in this case I found the SQL load, the second option, better).
    Thanks
    Dhanjit G.
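    As an illustration of the staging-table approach described above, here is a minimal JDBC sketch that scrubs the delimiter-conflicting characters while populating the staging table; the table name, columns, delimiter, and cleanup rule are hypothetical placeholders, not details from the post.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.math.BigDecimal;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class StagingLoad {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:oracle:thin:@//dbhost:1521/orcl"; // hypothetical
                try (Connection conn = DriverManager.getConnection(url, "user", "password");
                     BufferedReader in = new BufferedReader(new FileReader("daily_data.txt"));
                     PreparedStatement ps = conn.prepareStatement(
                             "INSERT INTO stg_essbase_load (member_name, data_value) VALUES (?, ?)")) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] fields = line.split("\t", 2);
                        if (fields.length < 2) continue; // structurally broken row: skip it
                        BigDecimal value;
                        try {
                            value = new BigDecimal(fields[1].trim());
                        } catch (NumberFormatException e) {
                            continue; // comment or error text instead of a number: skip it
                        }
                        // Strip the special characters that collide with rule-file delimiters.
                        ps.setString(1, fields[0].replaceAll("['`~#$%:,]", " ").trim());
                        ps.setBigDecimal(2, value);
                        ps.addBatch();
                    }
                    ps.executeBatch(); // the Essbase SQL load rule then reads from stg_essbase_load
                }
            }
        }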

  • How to load best practices data into CRM4.0 installation

    Hi,
    We have successfully installed CRM 4.0 on a lab system and would now like to install the CRM best-practices data into it.
    If I refer to the CRM BP help site http://help.sap.com/bp_crmv340/CRM_DE/index.htm,
    it looks like I need to install at least the following in order to run it properly:
    C73: CRM Essential Information 
    B01: CRM Generation 
    C71: CRM Connectivity 
    B09: CRM Replication 
    C10: CRM Master Data 
    B08: CRM Cross-Topic Functions
    I am not sure where to start and where to end. At the minimum, I need CRM Sales to start with.
    Do we have just one installation CD or a number of them? Also, are they available in the download area of service.sap.com?
    Appreciate the response.

    Of course you need to install the Best Practices configuration, or do your own config.
    Simply installing CRM 4.0 from the distribution CD/DVD will get you a plain vanilla CRM system with no configuration and obviously no data. The Best Practices guide you through the process of configuring CRM, and some tasks have even been automated. If you use some of the CATT processes from the Best Practices, you can even populate data in your new system (BP data, or replace the input files with your own data).
    In 12 years of SAP consulting, I have NEVER come across a situation whereby you simply install SAP from the distribution media and can start using it without ANY configuration.
    My advice is to work through the base configuration modules first, either by importing the BP config/data or by following the manual instructions to create the config/data yourself. Next, look at what your usage of CRM is going to be, for example Internet Sales, Service Management, et cetera, and then install the config for this module or these modules.

  • NLS data conversion – best practice

    Hello,
    I have several tables originating from a database with a single-byte character set. I want to load the data into a database with a multi-byte character set like UTF-8 and, in the future, be able to use the Unicode version of Oracle XE.
    When I use DDL scripts to create the tables on the new database and then try to load the data, I receive a lot of error messages regarding the size of the VARCHAR2 fields (which, of course, makes sense).
    As I understand it, I can solve the problem by doubling the size of the VARCHAR2 fields: VARCHAR2(20) would become VARCHAR2(40), and so on. Another option is to use the NVARCHAR2 datatype and retain the correlation with the number of characters in the field.
    I have never used NVARCHAR2 before, so I don't know if there are any side effects on the pre-built APEX processes like Automatic DML, Automatic Row Fetch and the like, or on the APEX data import mechanism.
    What will be the best practice solution for APEX?
    I'll appreciate any comments on the subjects,
    Arie.

    Hello,
    Thanks Maxim and Patrick for your replies.
    I started to answer Maxim when Patrick's post came in. It's interesting, as I tried to change this nls_length_semantics parameter once before, but without any success. I even wrote an APEX procedure to run over all my VARCHAR2 columns and change them to something like VARCHAR2(20 CHAR). However, I wasn't satisfied with this solution, partially because of what Patrick said about developers forgetting the full syntax, and partially because I read that some of the internal procedures (mainly those involving LOBs) do not support character-length semantics and always work in byte mode.
    Changing the nls_length_semantics parameter seems like a very good solution, mainly because, as Patrick wrote, "The big advantage is that you don't have to change any scripts or PL/SQL code."
    I'm just curious: what technique does APEX use to run on all the various single-byte and multi-byte character sets?
    Thanks,
    Arie.
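    As a side note on the nls_length_semantics approach discussed above: the parameter can also be set per session just before the original DDL scripts are run, so the unchanged VARCHAR2(20) declarations are interpreted as 20 characters rather than 20 bytes. A minimal JDBC sketch of that idea, with hypothetical connection details and table:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class CharSemanticsDdl {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:oracle:thin:@//dbhost:1521/xe"; // hypothetical
                try (Connection conn = DriverManager.getConnection(url, "user", "password");
                     Statement stmt = conn.createStatement()) {
                    // Interpret declared lengths in this session as characters, not bytes.
                    stmt.execute("ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR");
                    // The original DDL now runs unchanged: VARCHAR2(20) means 20 characters.
                    stmt.execute("CREATE TABLE demo_names (name VARCHAR2(20))");
                }
            }
        }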

  • Bean load best practice

    I am not new to Java, but up until now I have been a programmer. I am now getting more into design and architecture and have a question about best practice. The question arises from a UML class I was taking, but in the class we stayed within UML and did not get into implementation.
    My Question
    When creating classes and designing how they interact, what is the best practice for implementing associative relationships? For example, if I were modeling a Barn that contained Animals, I would create a Barn bean and an Animal bean. Since the Barn contains Animals, I could create the code like this:
    public class Barn {
        private String color;
        private Collection animals;

        public void setColor(String newColor) { this.color = newColor; }
        public String getColor() { return color; }
        public void setAnimals(Collection newAnimals) { this.animals = newAnimals; }
        public Collection getAnimals() { return animals; }
    }

    public class Animal {
        private String name;

        public void setName(String newName) { this.name = newName; }
        public String getName() { return name; }
    }
    The Collection within the Barn bean would be made up of Animal beans.
    This seems fairly straightforward. However, what if I loaded the bean from a database? When building the bean, do I also find all the animals, build the Animal beans, and create the Collection to store within the Barn object?
    Or
    Do I omit the animal Collection from my Barn bean and only populate the Collection at runtime, when someone calls the getAnimals method?
    I am confident that the latter is the better design for performance and synchronization issues, but I wanted to get other opinions.
    Do I need to read up more on object design?
    Thanks,
    Lonnie

    Use lazy initialization. Basically, unless the data is needed, don't load it.
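    A minimal sketch of that lazy-initialization idea applied to the Barn bean above; the loadAnimalsFromDatabase helper is a hypothetical placeholder for whatever persistence call you actually use:

        import java.util.ArrayList;
        import java.util.Collection;

        public class Barn {
            private String color;
            private Collection animals; // deliberately left null when the bean is built

            public Collection getAnimals() {
                // Lazy initialization: fetch the animals only on first access.
                if (animals == null) {
                    animals = loadAnimalsFromDatabase();
                }
                return animals;
            }

            // Hypothetical placeholder; a real bean would run the SELECT for this barn here.
            private Collection loadAnimalsFromDatabase() {
                return new ArrayList();
            }
        }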

  • Flat File load best practice

    Hi,
    I'm looking for a Flat File best practice for data loading.
    The need is to load flat-file data into BI 7. The flat-file structure has been standardized but contains 4 slightly different flavors of data; thus, some fields may be empty while others are mandatory. The idea is to have separate cubes at the end of the data flow.
    Onto the loading of said file:
    Is it best to load all data flavors into 1 PSA and then separate into 4 specific DSOs based on data type?
    Or should the data be separated into separate file loads as early as the PSA? That is, have 4 DataSources/PSAs and separate flows from there on up to the cubes?
    I guess the pros/cons may come down to where the maintenance falls: separate files vs. separate PSAs/DSOs?
    Appreciate any suggestions/advice.
    Thanks,
    Gregg

    I'm not sure there is a best practice for this scenario (or maybe there is one), as this is data related to a specific customer's needs. But if I were you, I would bring the one file into a single PSA and route the data from there to its respective DSO. That would give me more flexibility within BI to manipulate the data as needed, without having to involve the business for 4 different files (chances are they will get the file split wrong). In case of any issue, your troubleshooting then starts from the PSA rather than going through the file (very painful and frustrating) to see which records screwed up the report. I'm more comfortable handling BI objects than data files, because you know exactly where to look.

  • Unicode Migration using National Characterset data types - Best Practice ?

    I know that Oracle discourages the use of the national character set and national character set data types (NCHAR, NVARCHAR), but that is the route my company has decided to take, and I would like to know the best practice regarding this, specifically in relation to stored procedures.
    The database schema is being converted by changing all CHAR, VARCHAR and CLOB data types to NCHAR, NVARCHAR and NCLOB data types respectively and I would appreciate any suggestions regarding the changes that need to be made to stored procedures and if there are any hard and fast rules that need to be followed.
    Specific questions that I have are :
    1. Do CHAR and VARCHAR parameters need to be changed to NCHAR and NVARCHAR types?
    2. Do CHAR and VARCHAR variables need to be changed to NCHAR and NVARCHAR types?
    3. Do string literals need to be prefixed with 'N' in all cases? e.g.
    in variable assignments - v_module_name := N'ABCD'
    in variable comparisons - IF v_sp_access_mode = N'DL'
    in calls to other procedures passing string parameters - proc_xyz(v_module_name, N'String Parameter')
    in database column comparisons - WHERE COLUMN_XYZ = N'ABCD'
    If anybody has been through a similar exercise, please share your experience and point out any additional changes that may be required in other areas.
    Database details are as follows; the application is written in COBOL, and it is also being changed to be Unicode compliant:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    NLS_CHARACTERSET = WE8MSWIN1252
    NLS_NCHAR_CHARACTERSET = AL16UTF16

    ##1. While doing a test conversion I discovered that VARCHAR parameters need to be changed to NVARCHAR2 and not VARCHAR2; the same goes for VARCHAR variables.
    VARCHAR columns/parameters/variables should not be used, as Oracle reserves the right to change their semantics in the future. You should use VARCHAR2/NVARCHAR2.
    ##3. Not sure I understand; are you saying that Unicode columns (NVARCHAR2, NCHAR) in the database will only be able to store character strings made up of WE8MSWIN1252 characters?
    No, I meant literals. You cannot include non-WE8MSWIN1252 characters into a literal. Actually, you can include them under certain conditions but they will be transformed to an escaped form. See also the UNISTR function.
    ## Reason given for going down this route is that our application works with SQL Server and Oracle and this was the best option
    ## to keep the code/schemas consistent between the two databases
    First, you have to keep two sets of scripts anyway, because the DDL syntax differs between SQL Server and Oracle. There is therefore little benefit in just keeping the data type names the same while so many other things need to differ. If I designed your system, I would use a DB-agnostic object repository and a script generator to produce either SQL Server or Oracle scripts with the appropriate data types, or at least placeholder syntax that the application installer replaces with the appropriate data types per target system.
    ## I don't know if it is possible to create a database in SQL Server with a Unicode characterset/collation like you can in Oracle, that would have been the better option.
    I am not an SQL Server expert but I think VARCHAR data types are restricted to Windows ANSI code pages and those do not include Unicode.
    -- Sergiusz
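    A hedged aside beyond Sergiusz's reply: when the Unicode strings come from client code rather than hard-coded literals, binding them as parameters avoids the N'' literal question entirely, because the driver converts the bound value to the national character set. A minimal JDBC sketch (table and column names hypothetical):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class NcharBind {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:oracle:thin:@//dbhost:1521/orcl"; // hypothetical
                try (Connection conn = DriverManager.getConnection(url, "user", "password");
                     PreparedStatement ps = conn.prepareStatement(
                             "SELECT module_id FROM modules WHERE module_name = ?")) {
                    // setNString binds the value as national character data (NVARCHAR2),
                    // so no N'' prefix is needed and no characters are lost in conversion.
                    ps.setNString(1, "ABCD\u00E9");
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1));
                        }
                    }
                }
            }
        }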

  • Data access best practice

    The Oracle web site has an article about 9iAS best practices. Predefining column types in the select statement is one of the topics. The details follow.
    3.5.5 Defining Column Types
    Defining column types provides the following benefits:
    (1) Saves a roundtrip to the database server.
    (2) Defines the datatype for every column of the expected result set.
    (3) For VARCHAR, VARCHAR2, CHAR and CHAR2, specifies their maximum length.
    The following example illustrates the use of this feature. It assumes you have
    imported the oracle.jdbc.* and java.sql.* interfaces and classes.
    // ds is a DataSource object
    Connection conn = ds.getConnection();
    PreparedStatement pstmt = conn.prepareStatement(
            "select empno, ename, hiredate from emp");
    // Define the columns up front to avoid a describe roundtrip to the database
    ((OraclePreparedStatement) pstmt).defineColumnType(1, Types.INTEGER);
    // Column #2 is a VARCHAR; we need to specify its maximum length
    ((OraclePreparedStatement) pstmt).defineColumnType(2, Types.VARCHAR, 12);
    ((OraclePreparedStatement) pstmt).defineColumnType(3, Types.DATE);
    ResultSet rset = pstmt.executeQuery();
    while (rset.next())
        System.out.println(rset.getInt(1) + "," + rset.getString(2) + "," + rset.getDate(3));
    pstmt.close();
    Since I'm new to 9iAS, I'm not sure whether it's true that 9iAS really does an extra roundtrip to the database just to get the data types of the columns and then another roundtrip to get the data. Can anyone confirm this? Note also that the above example uses Oracle proprietary interfaces.
    Is there any way to trace the DB activity on the application server side without using an enterprise monitoring tool? WebLogic can dump all DB activity to a log file so that it can be reviewed.
    thanks!

    Dear Srini,
    Data-level security is not an issue for me at all. I have already implemented it, and so far not a single bug has been caught in testing.
    It's about object-level security, and for 6 different types of users demanding different reports, i.e. the columns and detailed drill-downs are different.
    Again, these 6 types of users can be read-only users or power users (who can do ad hoc analysis), perhaps BICONSUMER and BIAUTHOR.
    So I need help regarding that, as we have to make a decision soon.
    thanks,
    Yogen

  • Loader Best Practice Question

    The documentation is not very clear on what to do with regard to releasing memory, etc. Typically, I'm loading an image and assigning it to an <mx:Image>. Do I then need to dispose of or unload the Loader? If someone would be kind enough to either explain this or provide a good code example (that implements best practice), I'd appreciate it.
    Thanks.

    There has been some consternation about this. Check this
    thread for instance:
    http://www.adobe.com/cfusion/webforums/forum/messageview.cfm?forumid=60&catid=587&threadid=1179158

  • Data Migration Best Practice

    Is there a clear-cut best-practice procedure for conducting data migration from one company to a new one?

    I don't think there is a clear cut for that. Best practice is always relative; it varies dramatically depending on many factors. There is no magic bullet here.
    One exception to the above: you should always use the Tab-delimited Text format. It is a DTW-friendly format.
    Thanks,
    Gordon

  • Data Mining Best Practices

    Our organization is just beginning to use Data Mining in BI. We're trying to understand what the typical protocol is for moving data models into production? Is it standard practice to create the data models directly in the Production system or are these changes typically transported into the Production system? I have been unable to find any information on this in my research and would appreciate any input to help guide our decisions.
    Thanks,
    Nicole Daley

    Hi There,
    You're on the right track; here are a few additional guidelines:
    1. Determine your coverage levels along with the desired minimum data rate required by your application(s). Disabling lower data rates does have a significant impact on your coverage area.
    2. You have already prevented 802.11b clients by disabling 1, 2, 5.5, and 11 -- so that piece is taken care of.
    3. Typically, we see deployments having the lowest enabled data rate set to mandatory. This allows for the best client compatibility. You can also have higher mandatory rates, but then you need to confirm that all client devices will in fact support those higher rates. (Most clients do, but there are some exceptions). Worth noting here is that multicast traffic will be sent out at the highest mandatory data rate -- so if you have the need for higher bandwidth multicast traffic, you may want to have another data rate(s) set as mandatory.
    -Patrick Croak
    Wireless TAC

  • Help!  (Data Recovery best practices question)

    Recently my fiancée's MacBook (the first white model that used an Intel chipset), running 10.6.2, began to behave strangely (slow response times, hanging while applications launch, etc.). I decided to take an old external USB HD I had lying around and format it on my MBP in order to Time Machine her photos and iTunes library. Time Machine would not complete a backup, and I could not get any of the folders to copy through Finder (various file-corruption errors). I assumed it could be a permissions issue, so I inadvertently fired up my 10.5 disc and did a permissions repair. Afterwards the disk was even more flaky (which I believe was self-inflicted when I repaired with 10.5).
    I've since created a 10.6.2 bootable flash key and went out and bought Disk Warrior (4.2). I ran a directory repair and several Disk Utility repairs but was still unable to get the machine to behave properly (and unable to get Time Machine to complete). Attempting to run permissions repairs while booted from USB or the Snow Leopard install disc resulted in it hanging at '1 minute remaining' for well over an hour. My next step was to re-install Snow Leopard, but the install keeps failing after the progress bar completes.
    As it stands now, the volume on the internal HD is not bootable, and I'm running off my USB boot key, using 'cp -R *' in Terminal to copy her user folder onto the external USB hard drive. It seems to be working, but it's painfully slow (somewhere along the lines of maybe 10 MB per half hour, with 30 GB to copy). I'm guessing this speed has to do with my boot volume running off a flash drive.
    I'm thinking of running out and grabbing a FireWire cable and doing a target-disk boot from my MBP, hoping that would be a lot faster than what I'm experiencing now. My question is: would that be the wisest way to go? My plan of action is to grab her pictures and music, then erase and reformat the drive. Is it possible that I could try something else with Disk Warrior? I've heard a lot of good things about it, but I fear that I did a number on the volume when I accidentally ran the 10.5 permissions repair on it.
    Any additional help would be appreciated, as she has years of pictures on there that I'd hate to see her lose.

    That sounds like a sensible solution, although you need not replace the original drive. Install OS X on the external drive, boot from it and copy her data. Then erase her drive and use Disk Utility's Restore option to clone the external drive to the internal drive. If that works then she should continue using the external drive as a backup so the next time this happens she can restore from the backup.
    For next time: Repairing permissions is not a troubleshooting tool. It's rarely of any use, and it does not repair permissions in a Home folder. If a system is becoming unresponsive or just slower, then there are other things you should do. See the following:
    Kappy's Personal Suggestions for OS X Maintenance
    For disk repairs use Disk Utility. For situations DU cannot handle, the best third-party utilities are: Disk Warrior (DW only fixes problems with the disk directory, but most disk problems are caused by directory corruption; Disk Warrior 4.x is now Intel Mac compatible); TechTool Pro (provides additional repair options including file repair and recovery, system diagnostics, and disk defragmentation; TechTool Pro 4.5.1 or higher is Intel Mac compatible); and Drive Genius (similar to TechTool Pro in terms of the various repair services provided; versions 1.5.1 or later are Intel Mac compatible).
    OS X performs certain maintenance functions that are scheduled to occur on a daily, weekly, or monthly period. The maintenance scripts run in the early AM only if the computer is turned on 24/7 (no sleep.) If this isn't the case, then an excellent solution is to download and install a shareware utility such as Macaroni, JAW PseudoAnacron, or Anacron that will automate the maintenance activity regardless of whether the computer is turned off or asleep. Dependence upon third-party utilities to run the periodic maintenance scripts was significantly reduced in Tiger and Leopard. These utilities have limited or no functionality with Snow Leopard and should not be installed.
    OS X automatically defrags files less than 20 MBs in size, so unless you have a disk full of very large files there's little need for defragmenting the hard drive. As for virus protection there are few if any such animals affecting OS X. You can protect the computer easily using the freeware Open Source virus protection software ClamXAV. Personally I would avoid most commercial anti-virus software because of their potential for causing problems.
    I would also recommend downloading the shareware utility TinkerTool System that you can use for periodic maintenance such as removing old logfiles and archives, clearing caches, etc. Other utilities are also available such as Onyx, Leopard Cache Cleaner, CockTail, and Xupport, for example.
    For emergency repairs install the freeware utility Applejack (not compatible with Snow Leopard.) If you cannot start up in OS X, you may be able to start in single-user mode from which you can run Applejack to do a whole set of repair and maintenance routines from the commandline. Note that AppleJack 1.5 is required for Leopard. AppleJack is not compatible with Snow Leopard.
    When you install any new system software or updates be sure to repair the hard drive and permissions beforehand. I also recommend booting into safe mode before doing system software updates.
    Get an external Firewire drive at least equal in size to the internal hard drive and make (and maintain) a bootable clone/backup. You can make a bootable clone using the Restore option of Disk Utility. You can also make and maintain clones with good backup software. My personal recommendations are (order is not significant):
    1. Retrospect Desktop (Commercial - not yet universal binary)
    2. Synchronize! Pro X (Commercial)
    3. Synk (Backup, Standard, or Pro)
    4. Deja Vu (Shareware)
    5. Carbon Copy Cloner (Donationware)
    6. SuperDuper! (Commercial)
    7. Intego Personal Backup (Commercial)
    8. Data Backup (Commercial)
    9. SilverKeeper 2.0 (Freeware)
    10. MimMac (Commercial)
    11. Tri-Backup (Commercial)
    Visit The XLab FAQs and read the FAQs on maintenance, optimization, virus protection, and backup and restore.
    Additional suggestions will be found in Mac Maintenance Quick Assist.
    Referenced software can be found at www.versiontracker.com and www.macupdate.com.

  • Manage Portal data? Best practice type thing

    Hello,
    I am looking at how best to set up a Portal system in production, in particular a good process to back up and re-create a Portal system.
    I list some possibilities below. Does anyone know if this is how it is typically done? Does this cover all the data which should be backed up / migrated?
    thanks!
    1- 'Entitlements' data. As far as I know, this is stored in the embedded LDAP.
    Can this be extracted?
    2- DataSynch data.
    DataSynch web application.
    extract with ftp-like command
    upload as jar file
    3- Users and Groups.
    Export to a .dat file. (Suddenly I forget how to do this, though I think I saw it somewhere.)

    Okay, and then use an RFC call from the Web Dynpro application to fetch data from the SAP database?
    This answered my question:
    Best regards
    Øyvind Isaksen

  • Servlet - xml data storage best practice

    Hello - I am creating a webapp that is a combination of servlets and JSP. The app will access, store, and manipulate data in an XML file. I hope to deploy and distribute the webapp as a WAR file. I have been told that it is a bad idea to assume that the XML file, if included in a directory of the WAR file, will be writable, as the servlet spec does not guarantee that WARs are "exploded" into real file space. For that matter, it does not guarantee that the file space is writable at all.
    So, what is the best idea for the placement of this XML file? Should I have users create a home directory for the XML file to sit in, so it can be guaranteed to be writable? And, if so, how should I configure the webapp so that it will know where this file is kept?
    Any advice would be gratefully welcomed...
    Paul Phillips

    Great Question, but I need to take it a little further.
    First of all, my advice is to use an independent home directory for the XML file that can be located via a properties file or the like.
    This will make life easier when trying to deploy to a server such as JBoss (with Catalina/Tomcat), which doesn't extract the WAR file into a directory. In that case you would need to access your XML file while it resides inside the WAR file. I haven't tried this (it sounds painful), but I suspect there may be security/access problems when trying to get a FileOutputStream on a file inside the WAR.
    Anyway, I recommend the independent directory away from the hustle and bustle of the servers' directories. Having said that, I have a question in return: where do you put a newly created (on-the-fly) JSP that you want accessed via your webapp?
    In Tomcat it's easy: just put it in the tomcat/webapps/myapp directory. But this can't be done for JBoss with integrated Tomcat (jboss-3.0.0RC1_tomcat-4.0.3).
    Anyone got any ideas on that one?
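    To make the "independent directory located via configuration" advice above concrete, here is a minimal sketch that reads the directory from a servlet context init parameter; the parameter name, default path, and file name are hypothetical placeholders:

        import java.io.File;
        import javax.servlet.ServletContext;
        import javax.servlet.http.HttpServlet;

        public class DataFileLocator extends HttpServlet {
            private File dataFile;

            @Override
            public void init() {
                // Configured in web.xml, e.g.:
                //   <context-param>
                //     <param-name>dataDir</param-name>
                //     <param-value>/var/myapp/data</param-value>
                //   </context-param>
                ServletContext ctx = getServletContext();
                String dir = ctx.getInitParameter("dataDir"); // hypothetical parameter name
                if (dir == null) {
                    dir = System.getProperty("user.home"); // assumed fallback location
                }
                dataFile = new File(dir, "appdata.xml");
                dataFile.getParentFile().mkdirs(); // make sure the writable home exists
            }
        }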

  • Data Model best Practices for Large Data Models

    We are currently rolling out Hyperion IR 11.1.x and are trying to establish best practices for BQYs and how to present these models to our end users.
    So far, we have created an OCE file that limits the selectable tables to only those that are within the model.
    Then, we created a BQY that brings in the tables to a data model, created metatopics for the main tables and integrated the descriptions via lookups in the meta topics.
    This seems to be OK; however, any time I try to add items to a query, as soon as I add columns from different tables the app freezes up, hogs a bunch of memory, and then closes itself.
    Obviously, this isn't acceptable to hand to our end users, so I'm asking for suggestions.
    Are there settings I can change to get around this memory-hogging issue? Do I need to use a smaller model?
    And in general, how are you all deploying this tool to your users? Our users are accustomed to a pre-built data model, so they can just click, add the fields they want, and hit submit. How do I get close to that ideal with this tool?
    thanks for any help/advice.

    I answered my own question. In the case of the large data model, the tool by default was attempting to calculate every possible join path to get from Table A to Table B (even though there is a direct join between them).
    In the data model options, I changed the join setting to use the join path with the least number of topics. This skipped the extraneous steps and allowed me to proceed as normal.
    Hope this helps anyone else who bumps into this issue.
