BP 258 Sample Data Load

Hello,
I've configured the prerequisites for the Best Practices for BPC MS 7.5 (V1.31) and am on BP 258, trying to upload the sample data provided (258_Y2010_Sales_Actual.txt).  When I run the DM package for IMPORT, it errors out with:
Task Name : Convert Data
Dimension  list          CATEGORY,P_CC,RPTCURRENCY,TIME,CUSTOMER,PRODUCT,SALESACCOUNT,AMOUNT
[2010.004] Calculated member or invalid member          ACTUAL,SALESTEAM1,USD,2010.004,CUSTOMERA,PRODUCTA,SalesUnitPrice,-15000
[2010.004] Calculated member or invalid member          ACTUAL,SALESTEAM1,USD,2010.004,CUSTOMERA,PRODUCTA,SalesQuantity,200
[2010.004] Calculated member or invalid member          ACTUAL,SALESTEAM1,USD,2010.004,CUSTOMERA,PRODUCTA,TotalSalesRevenue,-3000000
All I can figure from this limited info is that it does not like the conversion of the TIME dimension.  I checked my dimension members, and my TIME dimension covers the values shown in the file (2010.004, for example).  The BP doc says nothing about maintaining the conversion file or a custom transformation file - it says to leave that empty to default to IMPORT.xls.
I just finished the BPC 410 course, but I can't correlate anything we did there with this error (or I missed it).
Can anyone shed some light on this error? 
thanks

Thank you for responding.  I ran through all the checks and they all look OK to me.
1 - Optimize: all green, no errors.
2 - Yes, I already checked this, but I am not 100% sure I am seeing what the system is seeing.  My dimension members include the TIME items that exist in the sample data.  Only one value did not exist: my members start at 2010.004 and the sample data included 2010.001.  I removed those records, but it still fails.
Here is the reject list:
Task Name : Convert Data
[Dimension:  TIME]
2010.007
2011.002
2010.005
2010.008
2011.001
2010.006
2011.003
2010.011
2010.004
2010.001
2010.012
2010.009
This tells me (an inexperienced BPC person) that there is a problem converting the external data to internal data for that dimension.  However, I have already validated that they are equal - I can't see how 2010.004 does not equate to 2010.004, unless it is comparing against the EVDESC rather than the ID (2010.004 versus 2010 APR).  Am I correct in that assumption?
3 - Yes, I've also tried switching to a new transformation file with the conversion and delimiters mapped differently, but with the same result. I'm sure I am missing something trivial here, so I do appreciate any help figuring it out.
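For anyone comparing, the custom transformation file I tried looked roughly like this (a minimal sketch only; the mappings follow the sample file's column order, and TIME_CONV.xls is an illustrative file name):
*OPTIONS
FORMAT = DELIMITED
HEADER = YES
DELIMITER = ,
*MAPPING
CATEGORY=*COL(1)
P_CC=*COL(2)
RPTCURRENCY=*COL(3)
TIME=*COL(4)
CUSTOMER=*COL(5)
PRODUCT=*COL(6)
SALESACCOUNT=*COL(7)
AMOUNT=*COL(8)
*CONVERSION
TIME=TIME_CONV.xls
The conversion workbook then maps EXTERNAL values to INTERNAL IDs row by row (e.g. EXTERNAL 2010.004 to INTERNAL 2010.APR, assuming the internal TIME IDs use month names; if the IDs really are 2010.004, no conversion row should be needed at all).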
Thanks again

Similar Messages

  • FDQM Sample data load file for HFM

    Hi all,
    I have just started working on FDQM 11.1.1.3. I have done the integration of an HFM application with FDQM. I need to load data to the HFM application, but I don't have any GL file or CSV file with HFM dimensions, so can anyone help me get such a file?
    And one more thing: I just want to know the basic steps I need to perform to load data to the HFM application after creating the FDM application and integrating it with HFM.
    Thanks.

    Hi,
    After creating the FDM application and integrating it with the HFM application (which also includes setting the Target Application name in FDM), the FDM application is ready with its target set to the HFM application.
    1. Create an Import Format for the application, for example using only the Custom1 and Custom2 dimensions of the 4 available Custom dimensions (adjust according to your dimensions). Use '|' (pipe) as the delimiter:
    Account|Account Description|Entity|C1|C2|Amount
    2. Create a text file with data like the lines below, combining members from each dimension specified in the import format:
    11001|Capex|111|000|000|500000
    11002|b-Capex|111|000|000|600000
    Note: these dimension members must be mapped in the 'Mappings' section; use the members in the file as source members and select any target HFM member for them.
    3. Tag this import format to a location.
    4. Import the text file using the 'Browse' option.
    5. Validate the loaded data.
    Note: if mappings exist for all dimension members used in the file, validation will succeed.
    6. Export the data.
    You can then export, and hence load, the data to the HFM application using the Replace/Merge option.
    These are the basic steps to load data from FDM to HFM.
    Regards,
    J

  • Data Load Tables - Shared components

    Hello All,
    There is a section called Data Loading in Shared Components in every application. It says: a Data Load Table is an existing table in your schema that has been selected for use in the data loading process, to upload data. Use Data Load Tables to define tables for use in the Data Loading create-page wizard.
    The question is: how can I select a table in my schema for use in the data loading process, to upload data using the wizard?
    There is a packaged application called Sample Data Loading. That sample is used for specific tables, right? I tried to change those tables to the ones I want to use, but I couldn't, because I could not add the tables I need.
    Thank you.

    Hi,
    The APEX version is Application Express 4.2.3.00.07.
    The sample data loading application created its Data Loading entry in Shared Components by default. In that section, I don't have the option to create a new entry for the table I want to load data into. I tried to modify the existing entry to point at my table, but I couldn't, because the table it references is not editable.
    I also tried to modify the Data Loading page the sample application created, but I can't change its source to the table I want.
    I have created a workspace at apex.oracle.com. If you want, I can give you credentials so you can help me, but I need your email to create the user for you. Thank you.
    Bernardo

  • EPM 11.1.2.2: Load sample data

    I have installed EPM 11.1.2.2 and now I'm trying to load the sample data, and I could use help locating instructions for this. While earlier versions of the EPM documentation had instructions for loading the sample data, I have yet to find them for 11.1.2.2. Perhaps the earlier versions' steps are supposed to work on this version as well, so I read them. They say to initialize the app, then in EAS right-click on a console icon where you have the option to load data. Unfortunately, I am using EPMA, and I saw no option to initialize when I created the app. In addition, EAS doesn't have the console icon, and right-clicking on the application doesn't do anything. In short, the prior documentation hasn't helped so far. I did find the documentation for EPM 11.1.2.2, but I have yet to locate within it the instructions for loading the sample application and data. Does anyone have any suggestions?

    I considered your comment about the app already existing, but if that were true, I think the database would exist in EAS, and it doesn't.
    I've also deleted all applications, even re-installed the Essbase components (including the database) so it would all be fresh, and then tried to create a new one, but still no initialization option is available. At this point, I'm assuming there is something wrong with my installation. Any suggestion on how to proceed would be appreciated.
    I've noticed that I don't have the create classic application option in my menu, only the transform from classic to EPMA option. I also can't navigate to http://<machinename>:8300/HyperionPlanning/AppWizard.jsp. When I noticed these things, I did as John suggested in another post and reselected the web server. This didn't make a difference, so I redeployed the apps AND selected the web server. Still not seeing it in the menu.

  • Sample SOAP request for Data Loader API

    Hi
    Can anyone please help me out with a sample SOAP request for the Data Loader API? This is to import, say, 1K records from my system to the CRM instance I have.

    Log into the application and then click on Training and Support; there is a WS Library of Information within the application.
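    For orientation, any request you build from that library will follow the standard SOAP envelope shape; a generic skeleton (the element names inside Header and Body are placeholders - the real ones come from the WS Library) looks like:
    <?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Header>
        <!-- session/authentication details per the WS Library -->
      </soap:Header>
      <soap:Body>
        <!-- the insert payload for your 1K records -->
      </soap:Body>
    </soap:Envelope>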

  • Loading BI Apps with demo (sample) data

    I would like to load the out-of-the-box OBI apps with sample data from Oracle so that clients can see what, for example, employee expense reports would look like. Does anyone know how to go about it? In simple terms, this is much like using the BISAMPLE schema in OBIEE. Any help will be greatly appreciated.

    Are you asking for sample data for the Fusion Applications similar to what EBS had with the "Vision" database, e.g. data populated in the Fusion Applications schema out of the box? If so, unfortunately such data is not available.
    Jani Rautiainen
    Fusion Applications Developer Relations
    https://blogs.oracle.com/fadevrel/

  • Data load from Legacy system to BW Server through BAPI

    Requirements: We have various legacy systems and an SAP BW server. We want to load all legacy system data into the SAP BW server using a BAPI. Before loading, we have to validate all data. If there is bad or missing data, we have to let the legacy system user/operator know, with a detailed explanation, so they can fix the data in their system. Once it is fixed, we load the data again.
    Load scenario: We have two options to load data from the legacy systems to the BW server.
    1. Load data directly from the legacy system to the BW server using a BAPI program.
    2. The legacy systems' data would be on workstations or a flash drive as .txt (comma-separated lines) or .csv files; load from the .txt/.csv file to the BW server using a BAPI program.
    What do we want in the BAPI program code?
    It will read the data from the text/CSV file and put it into an internal table whose structure is based on the BAPI InfoObject structure.
    It will call the BAPI InfoObject function module 'BAPI_IOBJ_CREATE' to create the InfoObject, include all necessary/default components, do the error checking, load the data, and return the status.
    Could someone help me with sample code, please? I am new to ABAP/BAPI coding.
    Is there any better way to load data from a legacy system to the BW server? BTW, we are using BW 3.10. Is there a better option with BI 7.0 to resolve this? I appreciate your help.

    My answers:
    1. This is a scenario for a data push into SAP BW. You can only use SOAP-based transfer of data.
    http://help.sap.com/saphelp_nw04/helpdata/en/fd/8012403dbedd5fe10000000a155106/frameset.htm
    (here for BW 3.5, but you'll find something similar for 7.0)
    In this scenario you'll have an RFC dynamically created for every InfoSource you need to transfer data to.
    2. You can make a process chain for each data load, and then call the RFC "RSPC_API_CHAIN_START" to start the chain from outside.
    The second solution is simpler and is available on every release.
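    A minimal sketch of that call from ABAP (the chain name is a placeholder; a non-SAP legacy system would make the same RFC call through the RFC SDK or JCo):
    * Start a process chain remotely; 'ZLEGACY_LOAD' is a hypothetical chain name.
    DATA: lv_logid TYPE rspc_logid.
    CALL FUNCTION 'RSPC_API_CHAIN_START'
      EXPORTING
        i_chain = 'ZLEGACY_LOAD'
      IMPORTING
        e_logid = lv_logid.
    * lv_logid identifies the chain run, so its status can be checked afterwards.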
    Regards,
    Sergio

  • How to tune data loading time in BSO using 14 rules files ?

    Hello there,
    I'm using Hyperion Essbase Administration Services v11.1.1.2 and the BSO option.
    In a nightly process using MaxL, I load new data into one Essbase cube.
    In this nightly update process, 14 account members are updated by running 14 rules files one after another.
    These rules files connect 14 times via SQL connection to the same Oracle database and the same table.
    I use this procedure because I cannot load 2 or more data fields using one rules file.
    It takes a long time to load the 14 accounts one after the other.
    Now my question: how can I minimise this data loading time?
    This is what I found on Oracle Homepage:
    What's New
    Oracle Essbase V.11.1.1 Release Highlights
    Parallel SQL Data Loads- Supports up to 8 rules files via temporary load buffers.
    In an Older Thread John said:
    As it is version 11 why not use parallel sql loading, you can specify up to 8 load rules to load data in parallel.
    Example:
    import database AsoSamp.Sample data
    connect as TBC identified by 'password'
    using multiple rules_file 'rule1','rule2'
    to load_buffer_block starting with buffer_id 100
    on error write to "error.txt";
    But this is for the ASO option only.
    Can I use it in my MaxL for BSO as well? Is there a sample?
    What else can I do to tune the nightly update time?
    Thanks in advance for every tip,
    Zeljko

    Thanks a lot for your support. I’m just a little confused.
    I will use an example to illustrate my problem a bit more clearly.
    This is the basic table, in my case a view, which is queried by all 14 rules files:
    column 1 --- column 2 --- column 3 --- column 4 --- ... --- column n
    dim 1 --- dim 2 --- dim 3 --- data1 --- data2 --- data3 --- ... --- data 14
    Region -- ID --- Product --- sales --- cogs ---- discounts --- ... --- amount
    West --- D1 --- Coffee --- 11001 --- 1,322 --- 10789 --- ... --- 548
    West --- D2 --- Tea10 --- 12011 --- 1,325 --- 10548 --- ... --- 589
    West --- S1 --- Tea10 --- 14115 --- 1,699 --- 10145 --- ... --- 852
    West --- C3 --- Tea10 --- 21053 --- 1,588 --- 10998 --- ... --- 981
    East ---- S2 --- Coffee --- 15563 --- 1,458 --- 10991 --- ... --- 876
    East ---- D1 --- Tea10 --- 15894 --- 1,664 --- 11615 --- ... --- 156
    East ---- D3 --- Coffee --- 19689 --- 1,989 --- 15615 --- ... --- 986
    East ---- C1 --- Coffee --- 18897 --- 1,988 --- 11898 --- ... --- 256
    East ---- C3 --- Tea10 --- 11699 --- 1,328 --- 12156 --- ... --- 9896
    Here are 3 of the 14 (load) rules files used to load the data columns into the cube:
    Rules File1:
    dim 1 --- dim 2 --- dim 3 --- sales --- ignore --- ignore --- ... --- ignore
    Rules File2:
    dim 1 --- dim 2 --- dim 3 --- ignore --- cogs --- ignore --- ... --- ignore
    Rules File14:
    dim 1 --- dim 2 --- dim 3 --- ignore --- ignore --- ignore --- ... --- amount
    Is the table design above what GlennS described as a "Data" column concept, which only allows a single numeric data value?
    In that case I can't tag two or more columns as “Data fields”. I can only tag one column as “Data field”; the other data fields I have to tag as “ignore fields during data load”. Otherwise, when I validate the rules file, an error occurs: “only one field can contain the Data Field attribute”.
    Or may I skip this error message and just try to tag all 14 fields as “Data fields” and load the data?
    Please advise.
    Am I right that the other way is to reconstruct the table/view (and the rules files) as follows, to load all of the data in one pass:
    dim 0 --- dim 1 --- dim 2 --- dim 3 --- data
    Account --- Region -- ID --- Product --- data
    sales --- West --- D1 --- Coffee --- 11001
    sales --- West --- D2 --- Tea10 --- 12011
    sales --- West --- S1 --- Tea10 --- 14115
    sales --- West --- C3 --- Tea10 --- 21053
    sales --- East ---- S2 --- Coffee --- 15563
    sales --- East ---- D1 --- Tea10 --- 15894
    sales --- East ---- D3 --- Coffee --- 19689
    sales --- East ---- C1 --- Coffee --- 18897
    sales --- East ---- C3 --- Tea10 --- 11699
    cogs --- West --- D1 --- Coffee --- 1,322
    cogs --- West --- D2 --- Tea10 --- 1,325
    cogs --- West --- S1 --- Tea10 --- 1,699
    cogs --- West --- C3 --- Tea10 --- 1,588
    cogs --- East ---- S2 --- Coffee --- 1,458
    cogs --- East ---- D1 --- Tea10 --- 1,664
    cogs --- East ---- D3 --- Coffee --- 1,989
    cogs --- East ---- C1 --- Coffee --- 1,988
    cogs --- East ---- C3 --- Tea10 --- 1,328
    discounts --- West --- D1 --- Coffee --- 10789
    discounts --- West --- D2 --- Tea10 --- 10548
    discounts --- West --- S1 --- Tea10 --- 10145
    discounts --- West --- C3 --- Tea10 --- 10998
    discounts --- East ---- S2 --- Coffee --- 10991
    discounts --- East ---- D1 --- Tea10 --- 11615
    discounts --- East ---- D3 --- Coffee --- 15615
    discounts --- East ---- C1 --- Coffee --- 11898
    discounts --- East ---- C3 --- Tea10 --- 12156
    amount --- West --- D1 --- Coffee --- 548
    amount --- West --- D2 --- Tea10 --- 589
    amount --- West --- S1 --- Tea10 --- 852
    amount --- West --- C3 --- Tea10 --- 981
    amount --- East ---- S2 --- Coffee --- 876
    amount --- East ---- D1 --- Tea10 --- 156
    amount --- East ---- D3 --- Coffee --- 986
    amount --- East ---- C1 --- Coffee --- 256
    amount --- East ---- C3 --- Tea10 --- 9896
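    If it helps, that reshaping could also be done directly in the database instead of maintaining a second table; a rough sketch using Oracle's UNPIVOT (11g or later; view and column names are assumptions based on the example above):
    -- one row per (account, region, id, product), matching the single-data-column layout
    CREATE OR REPLACE VIEW v_load_unpivoted AS
    SELECT account, region, id, product, data
    FROM my_source_view
    UNPIVOT (data FOR account IN (
      sales AS 'sales', cogs AS 'cogs', discounts AS 'discounts', amount AS 'amount'));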
    And the third way would be to adjust the essbase.cfg parameters DLTHREADSPREPARE and DLTHREADSWRITE (and DLSINGLETHREADPERSTAGE).
    I just want to be sure that I understand your suggestions.
    Many thanks for the awesome help,
    Zeljko

  • Adding leading zeros before data loaded into DSO

    Hi
    In the PROD_ID below, some IDs are missing their leading zeros when data is loaded into BI from SRM. The data type is character. If the leading zeros are missing, the DSO activation fails, and I have to add them manually in the PSA table. I want to add the leading zeros, where missing, before the data is loaded into the DSO. The total character length is 40, so e.g. if the value is 1502 there should be 36 zeros before it, and if it is 265721 there should be 34 zeros. Only values of length 4 or 6 arrive, so 34 or 36 leading zeros are always needed if they are missing.
    Can we use the CONVERSION_EXIT_ALPHA_INPUT function module? As this is a character field, I'm not sure how to use it in this case. Do I need to convert the value to an integer first?
    Can someone please give me sample code? We're using the BW 3.5 data flow to load data into the DSO. Please include where the code needs to go, either in the rule type or in the start routine.

    Hi,
    Can you check at the InfoObject level which conversion routine it uses?
    Use transaction RSD1, enter your InfoObject and display it.
    At the DataSource level you can also see the external/internal format it maintains.
    If your InfoObject uses the ALPHA conversion routine, it will get the leading zeros automatically.
    Check how the data arrives from the source, e.g. in RSA3.
    If you're seeing this issue for some records only, then you need to check those records.
    Thanks
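    If the padding does have to be coded in the 3.5 flow, a minimal start-routine sketch (assuming the package table is DATA_PACKAGE and the field is named PROD_ID; both names are placeholders for your actual structure):
    * Pad numeric character values with leading zeros up to the field length.
    LOOP AT DATA_PACKAGE.
      CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
        EXPORTING
          input  = DATA_PACKAGE-prod_id
        IMPORTING
          output = DATA_PACKAGE-prod_id.
      MODIFY DATA_PACKAGE.
    ENDLOOP.
    * No integer conversion is needed; ALPHA right-aligns the digits in the CHAR40 field.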

  • Navteq sample data

    hi,
    I have problems during the NAVTEQ sample data import (http://www.oracle.com/technology/software/products/mapviewer/htdocs/navsoft.html). I'm following exactly the instructions from the readme file.
    Import: Release 10.2.0.1.0 - Production on Thursday, 25 September, 2008 10:35:28
    Copyright (c) 2003, 2005, Oracle. All rights reserved.
    Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
    With the Partitioning, OLAP and Data Mining options
    Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/********@ORCL REMAP_SCHEMA=world_sample:world_sample dumpfile=world_sample.dmp directory=sample_world_data_dir
    Processing object type TABLE_EXPORT/TABLE/TABLE
    Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
    . . imported "WORLD_SAMPLE"."M_ADMIN_AREA4" 394.5 MB 118201 r
    After the last log message the system hangs with 50% CPU time and nothing happens. What could be the problem? Only the table M_ADMIN_AREA4 contains the 118201 rows.
    Regards, Arnd

    Hi, I followed the instructions from the readme file:
    1. Create a work directory on your machine. Change directory to your work
    directory and download the world_sample.zip file into it. Unzip the
    world_sample.zip file:
    C:\> mkdir world_sample
    C:\> cd world_sample
    C:\world_sample> unzip world_sample.zip
    2. In your work directory, open a Sqlplus session and connect as the SYSTEM
    user. Create a user for the World Sample data (if you have not previously
    done so):
    C:\world_sample> sqlplus SYSTEM/password_for_system
    SQL> CREATE USER world_sample IDENTIFIED BY world_sample;
    SQL> GRANT CONNECT, RESOURCE TO world_sample;
    3. This step loads the World Sample data into the specified user, and creates
    the metadata required to view the sample data as a Map in MapViewer. Before
    you begin, make a note of the full directory path of your work directory,
    since you will be prompted for it. You will also be prompted for the user
    (created in step 2), and the SYSTEM-user password. This step assumes that
    you are already connected as the SYSTEM user, and that you are in your work
    directory. To begin, run the load_sample_data.sql script in your Sqlplus
    session. Exit the Sqlplus session after the script has successfully
    concluded:
    SQL> @load_sample_data.sql
              Enter value for directory_name: C:\world_sample
              Enter value for user_name: world_sample
              Password: password_for_system
    SQL> exit;
    In step 2 I also created a dedicated tablespace for the user; same problem.
    Regards Arnd.

  • Oracle Data Loader On Demand on EHA Pod

    Oracle Data Loader doesn't work correctly.
    I downloaded it from staging (the EHA pod), and I did the following:
    1. In the "config" folder, I updated "OracleDataLoaderOnDemand.config" with:
    hosturl=https://secure-ausomxeha.crmondemand.com
    2. In the "sample" folder, I changed Owner_Full_Name in "account-insert.csv".
    Then, at the command prompt, I ran the batch file.
    It runs successfully, but the records aren't inserted on the EHA pod; the records end up on the EGA pod.
    This is the log.
    Is the Data Loader only for the EGA pod? Would you please give me some advice?
    [2012-09-19 14:49:55,281] DEBUG - [main] BulkOpsClient.main(): Execution begin.
    [2012-09-19 14:49:55,281] DEBUG - [main] BulkOpsClient.main(): List of all configurations loaded: {sessionkeepchkinterval=300, maxthreadfailure=1, testmode=production, logintimeoutms=180000, csvblocksize=1000, maxsoapsize=10240, impstatchkinterval=30, numofthreads=1, hosturl=https://secure-ausomxeha.crmondemand.com, maxloginattempts=1, routingurl=https://sso.crmondemand.com, manifestfiledir=.\Manifest\}
    [2012-09-19 14:49:55,281] DEBUG - [main] BulkOpsClient.main(): List of all options loaded: {datafilepath=sample/account-insert.csv, waitforcompletion=False, clientlogfiledir=., datetimeformat=usa, operation=insert, username=XXXX/XXXX, help=False, disableimportaudit=False, clientloglevel=detailed, mapfilepath=sample/account.map, duplicatecheckoption=externalid, csvdelimiter=,, importloglevel=errors, recordtype=account}
    [2012-09-19 14:49:55,296] DEBUG - [main] BulkOpsClientUtil.getPassword(): Entering.
    [2012-09-19 14:49:59,828] DEBUG - [main] BulkOpsClientUtil.getPassword(): Exiting.
    [2012-09-19 14:49:59,828] DEBUG - [main] BulkOpsClientUtil.lookupHostURL(): Entering.
    [2012-09-19 14:49:59,937] DEBUG - [main] BulkOpsClientUtil.lookupHostURL(): Sending Host lookup request to: https://sso.crmondemand.com/router/GetTarget
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.lookupHostURL(): Host lookup returned: <?xml version="1.0" encoding="UTF-8"?>
    <HostUrl>https://secure-ausomxega.crmondemand.com</HostUrl>
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.lookupHostURL(): Successfully extracted Host URL: https://secure-ausomxega.crmondemand.com
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.lookupHostURL(): Exiting.
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Entering.
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Host URL from the Routing app=https://secure-ausomxega.crmondemand.com
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Host URL from config file=https://secure-ausomxeha.crmondemand.com
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Successfully updated the config file: .\config\OracleDataLoaderOnDemand.config
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Host URL set to https://secure-ausomxega.crmondemand.com
    [2012-09-19 14:50:03,953] DEBUG - [main] BulkOpsClientUtil.determineWSHostURL(): Exiting.
    [2012-09-19 14:50:03,953] INFO - [main] Attempting to log in...
    [2012-09-19 14:50:10,171] INFO - [main] Successfully logged in as: XXXX/XXXX
    [2012-09-19 14:50:10,171] DEBUG - [main] BulkOpsClient.doImport(): Execution begin.
    [2012-09-19 14:50:10,171] INFO - [main] Validating Oracle Data Loader On Demand Import request...
    [2012-09-19 14:50:10,171] DEBUG - [main] FieldMappingManager.parseMappings(): Execution begin.
    [2012-09-19 14:50:10,171] DEBUG - [main] FieldMappingManager.parseMappings(): Execution complete.
    [2012-09-19 14:50:11,328] DEBUG - [Thread-3] ODWSSessionKeeperThread.Run(): Submitting BulkOpImportGetRequestDetail WS call
    [2012-09-19 14:50:11,328] INFO - [main] A SOAP request was sent to the server to create the import request.
    [2012-09-19 14:50:13,640] DEBUG - [Thread-3] SOAPImpRequestManager.sendImportGetRequestDetail(): SOAP request sent successfully and a response was received
    [2012-09-19 14:50:13,640] DEBUG - [Thread-3] ODWSSessionKeeperThread.Run(): BulkOpImportGetRequestDetail WS call finished
    [2012-09-19 14:50:13,640] DEBUG - [Thread-3] ODWSSessionKeeperThread.Run(): SOAP response status code=OK
    [2012-09-19 14:50:13,640] DEBUG - [Thread-3] ODWSSessionKeeperThread.Run(): Going to sleep for 300 seconds.
    [2012-09-19 14:50:20,328] INFO - [main] A response to the SOAP request sent to create the import request on the server has been received.
    [2012-09-19 14:50:20,328] DEBUG - [main] SOAPImpRequestManager.sendImportCreateRequest(): SOAP request sent successfully and a response was received
    [2012-09-19 14:50:20,328] INFO - [main] Oracle Data Loader On Demand Import validation PASSED.
    [2012-09-19 14:50:20,328] DEBUG - [main] BulkOpsClient.sendValidationRequest(): Execution complete.
    [2012-09-19 14:50:20,343] DEBUG - [main] ManifestManager.initManifest(): Creating manifest directory: .\\Manifest\\
    [2012-09-19 14:50:20,343] DEBUG - [main] BulkOpsClient.submitImportRequest(): Execution begin.
    [2012-09-19 14:50:20,390] DEBUG - [main] BulkOpsClient.submitImportRequest(): Sending CSV Data Segments.
    [2012-09-19 14:50:20,390] DEBUG - [main] CSVDataSender.CSVDataSender(): CSVDataSender will use 1 threads.
    [2012-09-19 14:50:20,390] INFO - [main] Submitting Oracle Data Loader On Demand Import request with the following Request Id: AEGA-FX28VK...
    [2012-09-19 14:50:20,390] DEBUG - [main] CSVDataSender.sendCSVData(): Creating thread 0
    [2012-09-19 14:50:20,390] INFO - [main] Import Request Submission Status: Started
    [2012-09-19 14:50:20,390] DEBUG - [main] CSVDataSender.sendCSVData(): Starting thread 0
    [2012-09-19 14:50:20,390] DEBUG - [main] CSVDataSender.sendCSVData(): There are pending requests. Going to sleep.
    [2012-09-19 14:50:20,406] DEBUG - [Thread-5] CSVDataSenderThread.run(): Thread 0 submitting CSV Data Segment: 1 of 1
    [2012-09-19 14:50:24,328] INFO - [Thread-5] A response to the import data SOAP request sent to the server has been received.
    [2012-09-19 14:50:24,328] DEBUG - [Thread-5] SOAPImpRequestManager.sendImportDataRequest(): SOAP request sent successfully and a response was received
    [2012-09-19 14:50:24,328] INFO - [Thread-5] A SOAP request containing import data was sent to the server: 1 of 1
    [2012-09-19 14:50:24,328] DEBUG - [Thread-5] CSVDataSenderThread.run(): There is no more pending request to be picked up by Thread 0.
    [2012-09-19 14:50:24,328] DEBUG - [Thread-5] CSVDataSenderThread.run(): Thread 0 terminating now.
    [2012-09-19 14:50:25,546] INFO - [main] Import Request Submission Status: 100.00%
    [2012-09-19 14:50:26,546] INFO - [main] Oracle Data Loader On Demand Import submission completed succesfully.
    [2012-09-19 14:50:26,546] DEBUG - [main] BulkOpsClient.submitImportRequest(): Execution complete.
    [2012-09-19 14:50:26,546] DEBUG - [main] BulkOpsClient.doImport(): Execution complete.
    [2012-09-19 14:50:26,546] INFO - [main] Attempting to log out...
    [2012-09-19 14:50:31,390] INFO - [main] XXXX/XXXX is now logged out.
    [2012-09-19 14:50:31,390] DEBUG - [Thread-3] ODWSSessionKeeperThread.Run(): Interrupted.
    [2012-09-19 14:50:31,390] DEBUG - [main] BulkOpsClient.main(): Execution complete.

    Hi,
    the Data Loader points to the production environment by default, regardless of whether you download it from staging or production.
    To change the pod, edit the config file and enter the content below:
    hosturl=https://secure-ausomxeha.crmondemand.com
    routingurl=https://secure-ausomxeha.crmondemand.com
    testmode=debug

  • Sample Data ODM

    Where can I get the sample data for the ODM schema -
    the *.ctl and *.dat files for SQL*Loader?
    The directory $ORACLE_HOME/dm/demo/sample is not present in my
    installation of 10g on Linux.
    Data mining makes no sense without data.

    Thanks for your help, Marko.
    I found the data but still have no idea what data mining is; I think I have to read some books.

  • How to prevent OLAP from doing unneccessary aggredations during data load?

    Hi,
    I'm trying to create a relatively simple two-dimensional OLAP cube (you might want to call it an "OLAP square"). My current environment is 11.2EE with AWM for workspace management.
    One dimension is date (all years -> year -> month -> day); the other is production unit, implemented as a hierarchy with a certain machine at the bottom level. A fact is defined by a pair of bottom-level values of these dimensions; for instance, a measure is taken once a day from each machine. I would like to store these detailed facts in the cube together with the aggregates, so they can easily be drilled down to without querying the original fact table.
    The aggregation rules are set to "Aggregate from level = default" (which is day and machine, respectively) for both of my dimensions, the cube is mapped to the fact table with dimension tables, the data is loaded, and the whole thing works as expected.
    The problem is with the load itself; I noticed it was too slow for my amount of sample data. Investigating, I found in the cube_build_log table the query the data is actually being loaded with.
    <SQL>
      <![CDATA[
    SELECT /*+  bypass_recursive_check  cursor_sharing_exact  no_expand  no_rewrite */
      T4_ID_DAY ALIAS_37,
      T1_ID_POT ALIAS_38,
      MAX(T7_TEMPERATURE)  ALIAS_39,
      MAX(T7_TEMPERATURE)  ALIAS_40,
      MAX(T7_METAL_HEIGHT)  ALIAS_41
    FROM
      ( SELECT /*+  no_rewrite */
        T1."DATE_TRUNC" T7_DATE_TRUNC,
        T1."METAL_HEIGHT" T7_METAL_HEIGHT,
        T1."TEMPERATURE" T7_TEMPERATURE,
        T1."POT_GLOBAL_ID" T7_POT_GLOBAL_ID
      FROM
        POTS."POT_BATH" T1   )
      T7,
      ( SELECT /*+  no_rewrite */
        T1."ID_DIM" T4_ID_DIM,
        T1."ID_DAY" T4_ID_DAY
      FROM
        RI."DIM_DATES" T1   )
      T4,
      ( SELECT /*+  no_rewrite */
        T1."ID_DIM" T1_ID_DIM,
        T1."ID_POT" T1_ID_POT
      FROM
        RI."DIM_POTS" T1   )
      T1
    WHERE
      ((T4_ID_DIM = T7_DATE_TRUNC)
        AND (T1_ID_DIM = T7_POT_GLOBAL_ID)
        AND ((T7_DATE_TRUNC)  IN  < a long long list of dates for currently processed cube partition is clipped >  ) ) )
    GROUP BY
      (T1_ID_POT, T4_ID_DAY)
    ORDER BY
      T1_ID_POT ASC NULLS LAST ,
      T4_ID_DAY ASC NULLS LAST ]]>
    </SQL>
    Notice T4_ID_DAY and T1_ID_POT in the top-level column list - these are the bottom-level identifiers of my dimensions, which means the query isn't actually aggregating anything here, as there is only one fact per (ID_DAY, ID_POT) pair.
    What I want to do is to somehow load the data without doing this (totally useless in my case) intermediate aggregation. Basically, I want it to be something like
    SELECT /*+  bypass_recursive_check  cursor_sharing_exact  no_expand  no_rewrite */
      T4_ID_DAY ALIAS_37,
      T1_ID_POT ALIAS_38,
      T7_TEMPERATURE  ALIAS_39,
      T7_TEMPERATURE  ALIAS_40,
      T7_METAL_HEIGHT  ALIAS_41
    FROM etc...
    without any aggregation. In fact, I could live even with the current loading query, as the amounts of data are not that large, but I want things to work the right way (more or less).
    Any chance to do it?
    Thanks.

    I defined a primary key over all the dimension keys in the fact table, but for some reason the build still uses an aggregation. Probably because the aggregation operator for the cube I'm currently playing with is actually set, and I don't see a way to undefine it from the UI toolbar you are referring to. This is a piece of the mapping section from the workspace file I exported using AWM:
    <CubeMap
          Name="MAP1"
          IsSolved="False"
          Query="POT_BATH_T"
          AggregationMethod="SUM">
    Looks like the build aggregates because it is clearly told to do so by the AggregationMethod attribute? Any way to override it?

  • Oracle Database Table data Load it into Excel

    Hello All,
    Please, I need your help with this problem:
    I need to load Oracle database table data into Excel and save it in xls format.
    Example: Select * from Sales, and load the result into Excel.
    I would appreciate any sample code to help me do that. Please help me out. This is very urgent.
    Thanks a lot and best regards,
    anbu

    >
    I need to load Oracle database table data into Excel and save it in xls format.
    Example: Select * from Sales, and load the result into Excel.
    I would appreciate any sample code to help me do that. Please help me out. This is very urgent.
    >
    Nothing in these forums is 'urgent'. If you have an urgent problem you should contact Oracle support or hire a consultant.
    You have proven over and over again that you are not a good steward of the forums. You continue to post questions that you say are 'urgent' but rarely take the time to mark your questions ANSWERED when they have been.
    Total Questions: 90 (78 unresolved)
    Are you willing to make a commitment to revisit your 78 unresolved questions and mark them ANSWERED if they have been?
    The easiest way to export Oracle data to Excel is to use sql developer. It is a free download and this article by Jeff Smith shows how easy it is
    http://www.thatjeffsmith.com/archive/2012/09/oracle-sql-developer-v3-2-1-now-available/
    >
    And One Last Thing
    Speaking of export, sometimes I want to send data to Excel. And sometimes I want to send multiple objects to Excel - to a single Excel file, that is. In version 3.2.1 you can now do that. Let's export the bulk of the HR schema to Excel, with each table going to its own worksheet in the same workbook.
    >
    And you have previously been asked to read the FAQ at the top of the thread list. If you had done that, you would have seen that there is a FAQ entry with links to many ways, with code, to export data to Excel.
    5. How do I read or write an Excel file?
    SQL and PL/SQL FAQ
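    For completeness, one classic no-extra-tools approach is to spool comma-separated output from SQL*Plus and open the resulting .csv in Excel (the table name is assumed from the post above; this produces rough CSV, not a native .xls file):
    SET PAGESIZE 0 LINESIZE 1000 FEEDBACK OFF TRIMSPOOL ON COLSEP ','
    SPOOL sales.csv
    SELECT * FROM sales;
    SPOOL OFF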

  • Oracle Reports Developer 6i - sample data tables

    I downloaded and installed all the components of Oracle Reports 6i on my PC. When I look at the Oracle Reports Developer manual (building reports.pdf), it refers to database tables like stocks, stock_history, etc. I am trying to learn how to use the tool with this manual. Aren't the SQL scripts that create those tables and load the sample data part of the installation? Am I missing anything? I will appreciate any input. Again, I did a full install by selecting all the components, not just a typical install.
    Thanks in advance.

    Hi Suresh
    Try locating these tables in the scott schema. If you don't find them there, try locating the scripts that create these tables in the Oracle Database installation folder.
    Regards
    Sripathy
