Sales schema and data load script

Hi,
I am trying to learn OBIEE on my own. I managed to install Oracle XE and OBIEE 10g. I need a script to create a new database schema and load sales tables with sample data so I can create reports. Can anyone help me with the scripts?

As I mentioned, I don't think these scripts come with an XE install, primarily because XE may not be certified with the sample schemas. This is why I would recommend installing the full database instead of trying to run these scripts on XE.
Did you try to install the actual Oracle DB Enterprise Edition? I believe there is an Examples component in that install which includes these scripts. Please try this and let me know.
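That said, if you just want a small schema to report against while you sort out the install, you can hand-roll one. A minimal sketch, assuming only SQL*Plus and a DBA account; all names here (SALES_DEMO, PRODUCTS, CUSTOMERS, SALES) are made up for illustration and are not the official sample schema:

-- Sketch only: demo user plus a tiny star of sales tables.
CREATE USER sales_demo IDENTIFIED BY sales_demo;
GRANT CONNECT, RESOURCE TO sales_demo;

CREATE TABLE sales_demo.products (
  product_id   NUMBER       PRIMARY KEY,
  product_name VARCHAR2(50) NOT NULL,
  list_price   NUMBER(10,2)
);

CREATE TABLE sales_demo.customers (
  customer_id   NUMBER       PRIMARY KEY,
  customer_name VARCHAR2(50) NOT NULL,
  region        VARCHAR2(20)
);

CREATE TABLE sales_demo.sales (
  sale_id     NUMBER PRIMARY KEY,
  product_id  NUMBER REFERENCES sales_demo.products,
  customer_id NUMBER REFERENCES sales_demo.customers,
  sale_date   DATE,
  quantity    NUMBER,
  amount      NUMBER(12,2)
);

-- A few sample rows so reports have something to aggregate.
INSERT INTO sales_demo.products VALUES (1, 'Widget', 9.99);
INSERT INTO sales_demo.products VALUES (2, 'Gadget', 24.50);
INSERT INTO sales_demo.customers VALUES (1, 'Acme Corp', 'EMEA');
INSERT INTO sales_demo.customers VALUES (2, 'Globex', 'AMER');
INSERT INTO sales_demo.sales VALUES (1, 1, 1, DATE '2009-01-15', 10, 99.90);
INSERT INTO sales_demo.sales VALUES (2, 2, 2, DATE '2009-02-03', 2, 49.00);
COMMIT;

Once the rows are in, you can model SALES joined to PRODUCTS and CUSTOMERS in the OBIEE repository and test a simple revenue-by-region report against it.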
If this helped, please mark as helpful or correct.

Similar Messages

  • Automate EIS dim builds and data loads

    I want to automate the dimension builds and data loads from my ETL tool (DTS). I have not been able to find anything about scripting EIS automation in the documentation. Is there any?

    What you can do is go into the EIS metaoutline and create a member load script and a data load script. Do this by selecting the Outline menu item, then select Member Load. Click Next; on this screen, select only "Save load script" and click the "Save scripts" button to give it a name, then click Finish. Repeat for the data load script. (If you are using ASO cubes, you must use separate scripts; you can't do both in one script.) Then create a batch file to run the member load and data load scripts. In DTS, use an Execute Process task to run the batch file.

  • Database compare script to sync schema and data

    1. I have database-1
    2. I have database-2
    3. database-1 and database-2 are on the same Oracle 10g server and are identical in schema as well as data
    4. All operations are done on database-1 through my application
    5. At the end of the day I need a script to be run on database-2 so that database-1 and database-2
    are alike, i.e., both schema and data should be the same
    Can anyone please suggest the best solution for producing such a script?
    In our scenario this script has to be transferred to another location.
    Thanks in advance
    Vivek

    Hi Sybrand Bakker,
    I tried Streams for replication purposes as per your suggestion, but so far I have been unable to make it work; I cannot find a step-by-step document that makes it possible without errors.
    One more thing: I need Streams to work without a database link, i.e., the source database is not connected directly to the destination database. I need to create the streams, transfer them as a file through FTP, download the file at the remote location, and then apply the streams to the destination database; after this the source and destination databases should be the same in schema and data.
    Please suggest a solution for this scenario. We are at a critical stage and need to make it happen...
    Thanking you in advance,
    With regards,
    Vivek
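    For the file-and-FTP constraint described above, one alternative worth considering (it is not Streams, but it matches the no-database-link requirement) is a nightly Data Pump schema export that you ship as a file and import on the other side. A rough sketch using the DBMS_DATAPUMP PL/SQL API available in 10g; the directory object DP_DIR, the schema APP_OWNER, and the file name are placeholders:

    -- Sketch only: export the application schema on database-1.
    DECLARE
      h     NUMBER;
      state VARCHAR2(30);
    BEGIN
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
      DBMS_DATAPUMP.ADD_FILE(h, 'db1_daily.dmp', 'DP_DIR');
      DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', '= ''APP_OWNER''');
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);  -- blocks until the export finishes
    END;
    /
    -- FTP db1_daily.dmp to the remote site, then run the symmetric
    -- IMPORT job (or the impdp utility) against database-2.

    Note that this replaces the whole schema each night rather than applying incremental changes, so it trades the complexity of Streams for longer load times on large schemas.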

  • Training on CalcScripts, Reporting Scripts, MaxL and Data Loading

    Hi All
    I am new to this forum. I am looking for someone who can train me on topics like CalcScripts, Reporting Scripts, MaxL and Data Loading.
    I am willing to pay for your time. Please let me know.
    Thanks

    Hi Friend,
    As you seem to be new to Essbase, you should first learn what Essbase and OLAP are and the difference between dense and sparse, and then use the Essbase Tech Ref for further reference.
    After that, go through
    https://blogs.oracle.com/HyperionPlanning/ and start exploring CalcScripts, MaxL, etc.
    And all of this is free for you!
    Thanks,
    Avneet

  • EIS Member and Data Load - Getting OS Error - Please help!

    Please help! I have created an OLAP model and then a metaoutline.
    Then I went ahead with the member and data load: I logged into my server and started the member and data load.
    It gives me the following error:
    SELECT /*+ */ .. FROM <my_view_name>
    OS Error No such file or directory IS Error Member load terminated with error.
    The load terminated with errors.
    Thanks in advance for any replies.

    Thanks all! The error has been resolved.
    I just had to create a directory in the Integration Services folder: $ISHOME/loadinfo
    (the loadinfo folder was missing).
    Prathap,
    Is that view available at that time? The query is generated automatically.
    Which data source and which version of Hyperion? The data source is Oracle 10g and the Hyperion version is 9.3.

  • How can I add dimensions and load data into Planning applications?

    Please let me know how I can add dimensions and load data into a Planning application without doing it manually.

    You can use tools like ODI, DIM or HAL to load metadata and data into Planning applications.
    The data load can be done at the Essbase end using a rules file, but metadata changes should flow from Planning to Essbase through one of the above-mentioned tools. There are also other ways to achieve the same.
    - Krish

  • Oracle DB schema and data comparison tools that compare BLOBs

    I'm looking for a schema and data comparison tool like DBDiff or DbTools, but it has to be able to compare BLOB and CLOB fields. I went through a few products available on the market but could not find any that does that.
    Can you please recommend a tool that will help me with this?
    Thanks,
    E

    Hi.
    I use Comparenicus for Oracle from Orbium Software. It compares data and schema, including CLOBs and BLOBs.
    It can also handle large tables, which is very useful for some of my environments.
    Last (but not least), it has a unique feature for copying selective data from one DB to another. You can read about it in this post:
    Efficient way to copy business data from Production DB to Test DB
    Enjoy..
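    If a dedicated tool turns out to be overkill, you can also get a basic LOB comparison with plain SQL by diffing per-row digests computed on each side. A rough sketch; the table DOCS, its ID and PAYLOAD columns, and the digest tables are made-up names, and DBMS_CRYPTO needs an EXECUTE grant:

    -- On each database, compute a digest per row (DBMS_CRYPTO.HASH has
    -- both BLOB and CLOB overloads, so this covers both LOB types;
    -- assumes non-null payloads).
    CREATE TABLE docs_digest AS
    SELECT id,
           RAWTOHEX(DBMS_CRYPTO.HASH(payload, DBMS_CRYPTO.HASH_SH1)) AS digest
      FROM docs;

    -- Bring one side's digest table across (export, DB link, etc.)
    -- and diff the two digest sets in either direction:
    SELECT id, digest FROM docs_digest_db1
    MINUS
    SELECT id, digest FROM docs_digest_db2;

    Rows that show up in the MINUS output have LOB content that differs (or exists on only one side), which at least tells you where to look.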

  • Is there any BAPI I can use to create a sales order and date?

    Hi Gurus,
    Is there any BAPI I can use to create a sales order and date?

    This appears to be related to Problem With BAPI_SALESORDER_CREATEFROMDATA but without the detail. If you want to ask again, please add that detail.
    locked
    Rob

  • What is PMS Data Migration Experience and Data Migration Scripts preparation

    Hi All, what is PMS Data Migration Experience and Data Migration Scripts preparation?
    Why do we use these in HRMS (EBS)?

    Please provide a link to where you got these terms from.
    "PMS Data Migration Experience" is not an EBS term; it is most likely a vendor or third-party phrase/tool/software.
    "Data Migration" in HRMS (and other EBS modules) typically refers to the process of moving data from a legacy system into EBS as part of going live with EBS.
    HTH
    Srini

  • Need data loader script for Buyer Creation for R12

    Hi,
    Does anybody have a Data Loader script for buyer creation in R12? Please copy and paste one line of it in a reply to this message.
    Thanks


  • Selective Deletion and Data Load from Setup Tables for LIS Extractor

    Hi All,
    We came across a situation where one of the deltas in the PSA was missed and not loaded into the DSO. This DSO updates another cube in the flow. Since many days have passed since this miss came to our knowledge, we need to selectively delete the data for those documents from the DSO and the cube, and then run a full load for those documents after filling the setup table.
    Now, what is the right approach to load this data from the setup table to the DSO and on to the cube? There is a change log present for those documents, and a few KPIs in the DSO are in summation mode.
    Regards
    Jitendra

    Thanks Ajeet!
    This is the Sales Order extractor. The data got loaded into the ODS just fine, but the data is coming into the ODS from a different extractor. Everything is fine in the ODS, but not in the cube. Would a full repair request versus a full load make a difference when the data is going to the cube? I thought it would matter only if I were loading to the ODS.
    What do you mean by "even if you do a full load without any selections you should do a full repair"?
    Thanks.
    W

  • Query performance and data loading performance issues

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes. This is urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1) Identify long-running extraction processes on the source system. Extraction is performed by several extraction jobs running on the source system, and the run-time of these jobs affects performance. Use transaction code SM37 (Background Processing Job Management) to analyze their run-times. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently; this way, less data is written into the update tables for each run and extraction performance increases.
    2) Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 (ABAP/4 Run-time Analysis) and run the analysis for transaction code RSA3 (Extractor Checker). The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3) Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 (Performance Trace). On this screen, select the ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with the ALEREMOTE user.
    4) Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 (Maintain RFC Destination). Load balancing is possible only if the extraction program allows this option.
    5) Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing the packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW (BW IMG Menu) on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6) Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves extraction performance, because it reduces the read times of those tables (see the sketch after this list).
    7) Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time; they can be scheduled via transaction code SM37 in the source system.
    8) Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube or ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one; for example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9) Build secondary indexes on the tables for the selection fields. This optimizes these tables for reading and reduces extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10) Analyze upload times to the PSA and identify long-running uploads. When you extract data using the PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions of records, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it is faster to insert data into smaller database tables. Partitioning also improves performance for maintenance of PSA tables; for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11) Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out at run-time.
    12) Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in memory. If you increase the number range before a high-volume data upload, you reduce the number of reads from the dimension tables and hence increase upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number-range buffer values for InfoCubes.
    13) Drop the indexes before uploading high-volume data into InfoCubes, and regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling, and you can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14) IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
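    For tips 6 and 9, the index itself is ordinary DDL at the database level; in practice you would define it in the ABAP Dictionary (SE11) so SAP knows about it, but what gets generated amounts to something like the following sketch, where the table and field names are made up for illustration:

    -- Hypothetical secondary index on a DataSource/source table,
    -- covering the fields used in the extraction selection.
    CREATE INDEX zsrc_sel01 ON zsource_table (doc_date, doc_type);

    The index only pays off if the extraction's WHERE clause actually filters on those fields, so check the traced SQL from tip 3 before adding it.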
    Hope it Helps
    Chetan

  • What are the CRM transaction data DataSources and the data load procedure?

    Hi BI Gurus,
    Can anybody provide the CRM transaction data DataSource names and the procedure for loading them into BI 7.0?
    I know the master data load procedure from CRM to BI 7.0.
    Step-by-step documents would be even more helpful.
    Thanks in Advance,
    Venkat

    Hi Venkat
    In order to find the transactions you want, you can log in to the CRM system and then use transaction RSA6. There you can expand all the subtrees by left-clicking on the first line and then clicking the expand button further left. After that you can easily search for any DataSource you may want.
    Hope that helps
    Rgds
    John

  • Relationship between ERPi metadata rules and Data load rules

    I am using version 11.1.1.3 to load EBS data into ERPi and then into FDM. I have created a metadata rule where I have assigned a ledger from the Source Accounting Entities. I have also created a data load rule that references the same ledger as the metadata rule. I have about 50 ledgers to integrate, so I have added the source adapters that reference the data load rule.
    My question is: what is the relationship between the metadata rule and the data load rule with respect to the ledger? If you can specify only one ledger per metadata rule, how does FDM know to use another metadata rule with another ledger attached to it? Thanks!

    Aksik,
    1. How frequently does this activation problem occur? If it is a one-time issue, replicate the DataSource and activate the transfer structure (although in general, as you know, activation of the transfer structure should happen automatically after transport of the object).
    2. One reason for the difference in time is environmental: as you know, in the production system many jobs run at the same time, so system performance will obviously be slower compared to the dev system. In your case both systems are performing comparably: you said the dev system took half an hour for 50,000 records and production took 2 hours for 200,000 records, so there are more records in production and it took proportionally longer. If it is really causing a problem, then you have to do some performance tuning.
    Hope this helps.
    Thanks,
    Sat

  • How to export an app + underlying schema (and data)

    Hi
    I am trying to export my application and its underlying tables to a new system.
    I want to know if it can be done together rather than separately (export the app and then export each table).
    Is there a way to package the app and its underlying repository?
    Thanks

    No...
    You will have to do them separately, as there is no logical connection between the application and the tables.
    Your application just has access to those tables and does not "own" any of them.
    a) Oracle database tables / application tables:
    You can export the entire schema, with all the objects and data in that schema, using the export and import utilities.
    All these objects are owned by the schema (e.g. EMP under SCOTT) and hence are related logically.
    b) Oracle APEX metadata tables:
    All the information about APEX objects is stored as metadata in database tables (FLOWS_0300 is a schema).
    So, if you install Oracle APEX on a new machine, all these repository tables will be created.
    When you create applications/objects, these tables will hold the metadata for those applications.
    Hope this helps,
    Rajesh.
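    For the schema half, the same DBMS_DATAPUMP approach sketched in the replication thread above works; the import side on the new system is symmetric. A minimal sketch, where the directory object DP_DIR, the dump file name, and the required privileges are assumptions:

    -- Sketch only: import the exported schema on the new machine.
    -- The job needs the IMP_FULL_DATABASE role or ownership of the
    -- target schema.
    DECLARE
      h     NUMBER;
      state VARCHAR2(30);
    BEGIN
      h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'SCHEMA');
      DBMS_DATAPUMP.ADD_FILE(h, 'app_schema.dmp', 'DP_DIR');
      DBMS_DATAPUMP.SET_PARAMETER(h, 'TABLE_EXISTS_ACTION', 'REPLACE');
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);
    END;
    /

    The APEX application itself still has to be exported and imported separately through the Application Builder's export page, since, as Rajesh says, it is not owned by the schema.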
