Data Loading Wizard loses Column Mapping

I created pages of the APEX page type "Data Loading." They are being used to populate staging tables and worked great in my existing environment. However, I migrated the DB using export and imported it into a new environment. Similarly, I exported my APEX application and imported it into the new environment. It appears that the second page of the Data Loading Wizard has lost the mapping of which table it is loading into. On every column in the second page, the only option for the column name is "Do Not Load". What do I need to move over so that my Data Loading Wizard pages remember the column names for the table they were mapped to in the old environment?

Hi David,
You didn't mention what version of APEX you are using. There was an issue in 4.1 where the owner wasn't being updated in the export file:
wwv_flow_api.create_load_table(
  p_id => 3570143708960669327 + wwv_flow_api.g_id_offset,
  p_flow_id => wwv_flow.g_flow_id,
  p_name => 'Load Employees',
  p_owner => 'PAUL',
  p_table_name => 'EMP',
  p_unique_column_1 => 'EMPNO',
  p_is_uk1_case_sensitive => 'N',
  p_unique_column_2 => '',
  p_is_uk2_case_sensitive => 'N',
  p_unique_column_3 => '',
  p_is_uk3_case_sensitive => 'N',
  p_wizard_page_ids => '',
  p_comments => '');

Try modifying the p_owner => 'PAUL' line in your export file and change it to the name of the schema you're importing it into.
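For example, if the parsing schema in the new environment were a hypothetical SCOTT2, the line would become:
p_owner => 'SCOTT2',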
Thanks
Paul

Similar Messages

  • Modify the Data Load wizard to remove the check for unique columns

    Hi,
    I'm trying to add a Data Loading Wizard to my application, which basically works nicely except for one thing:
    I want to remove the check for unique columns and insert ALL the data that the user wants to upload
    (like the Data Load utility under Development).
    The reason is that the data I want to upload is an export of my bank statements (in XLS / CSV format),
    and these exports don't have any primary key.
    Using the data upload pages created by the APEX wizard, I would lose data if there were two identical payments in a day.
    Is there a way to do this while keeping the pages created by the wizard for data loading?
    APEX 4.2.1

    I would suggest loading the data into a view and processing the records into the real table(s) with INSTEAD OF triggers, as sketched below. When you add a "1=2" WHERE clause to the view, it never contains any data, so everything will be loaded.
    Otherwise you have to have a "real" PK value, like a timestamp or a statement_nr + line_nr.
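    A minimal sketch of that approach, assuming a hypothetical target table BANK_PAYMENTS (all object and column names are illustrative):
    -- View that never returns rows, so the wizard's duplicate check finds nothing
    create or replace view bank_payments_load_v as
      select payment_date, amount, description
      from   bank_payments
      where  1 = 2;
    -- INSTEAD OF trigger that inserts every uploaded row into the real table
    create or replace trigger bank_payments_load_trg
      instead of insert on bank_payments_load_v
      for each row
    begin
      insert into bank_payments (payment_date, amount, description)
      values (:new.payment_date, :new.amount, :new.description);
    end;
    /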

  • 4.2.3/.4 Data load wizard - slow when loading large files

    Hi,
    I am using the Data Load Wizard to load CSV files into an existing table. It works fine with small files of up to a few thousand rows. When loading 20k rows or more, the loading process becomes very slow. The table has a single numeric primary key column.
    The primary key is declared under Shared Components > Logic > Data Load Tables and is recognized as "PK (number)" with "Case Sensitive" set to "No".
    While loading data, this configuration leads to the execution of the following query for each row:
    select 1 from "KLAUS"."PD_IF_CSV_ROW" where upper("PK") = upper(:uk_1)
    which can be found in the v$sql view during the load.
    This makes the loading process slow, because the UPPER function prevents any index from being used.
    It seems that the "Case Sensitive" setting is not evaluated.
    Dropping the numeric index on the primary key and using a function-based index does not help.
    The explain plan shows an implicit TO_CHAR conversion:
    UPPER(TO_CHAR("PK")) = UPPER(:UK_1)
    This conversion does not appear in the query text, but it is probably what a function-based index would have to match.
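    For reference, a function-based index matching the implicitly converted predicate from the explain plan would look like the sketch below (names taken from this thread; as noted above, a function-based index did not help in this case):
    create index pd_if_csv_row_fbi
      on "KLAUS"."PD_IF_CSV_ROW" (upper(to_char("PK")));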
    Please provide a solution or workaround so the Data Load Wizard can handle large files in an acceptable amount of time.
    Best regards
    Klaus

    Nevertheless, a bulk loading process is what I would really like to have as part of the wizard.
    If all of the CSV files are identical:
    use the Excel2Collection plugin (Process Type Plugin - EXCEL2COLLECTIONS),
    create a VIEW on the collection (makes it easier elsewhere), and
    create a procedure (in a package) to bulk-process it; see the sketch below.
    The most important thing is to have, somewhere in the package (i.e. your code that is not part of APEX), information that clearly states which columns in the collection map to which columns in the table or view, and to the variables (APEX_APPLICATION.g_fxx()) used for tabular forms.
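    A minimal sketch of such a bulk-processing step, assuming a hypothetical collection named EXCEL_DATA whose C001/C002 members map to two columns of a hypothetical TARGET_TABLE (all names are illustrative):
    create or replace procedure load_from_collection is
    begin
      -- APEX_COLLECTIONS exposes each uploaded row's values as C001..C050;
      -- it is only visible from within an APEX session context
      insert into target_table (col_a, col_b)
      select c001, to_number(c002)
      from   apex_collections
      where  collection_name = 'EXCEL_DATA';
    end;
    /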
    MK

  • Data Loading Wizard in Apex 4.1.1

    I am having an issue utilizing the out-of-the-box APEX Data Loading Wizard.
    My data (separated by \t) has double quotes (") in it, and no variation of the "Enclosed by" or "Separated by" item settings allows the "Parse Uploaded Data" process to parse the information correctly.
    Am I missing something obvious? Has anyone got it to work with data that contains double quotes?

    Hi Usul,
    I am able to parse data with double quotes that is separated by a tab (\t). Can you double-check that your file is tab-separated and contains double quotes? Or would you share the file with me so that I can check what is going on?
    Regards,
    Patrick

  • Data Load Wizard not Inserting/Updating all rows

    Hello,
    I am able to run through the whole Data Load Wizard without any problems. It reports that it successfully inserted/updated all the rows, but when I look in the table, I find a few rows that were not updated correctly. The entries I've identified as not being inserted/updated properly are the same rows I was having issues with earlier. That issue was a number format error, which I solved by providing an explicit number format.
    Is it possible that the false inserts/updates are still tied to the number format, or are there other reasons why the data load is failing on only some rows?
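    For reference, an explicit number format mask of the kind mentioned above can be checked directly in SQL (a hedged example; the actual mask depends on the data):
    -- assumes an NLS group separator of ',' and decimal separator of '.'
    select to_number('1,234.56', '999G999D99') from dual;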
    Thanks,
    Brian

    Hi Brian,
    I am not aware of a situation where you would get false results. However, there were some issues with number/date formats that sometimes were not properly parsed, and this has been fixed in the 4.1.1 patch. If your case is different from the one described in bug 13656397, I would be happy to get more details so that I can take a look at what is going on.
    Regards,
    Patrick

  • USING DATA LOADING WIZARD

    All,
    I am writing a data load process to copy & paste or upload files, and all is good. But now I want to bypass the column mapping step (run it in the background / merge it into step 1) so a user doesn't see it while loading the data. By default the stages are:
    Data Load Source
    Data / Table Mapping
    Data Validation
    Data Load Results
    So I want to run the 2nd step in the background (or, if I can, move that function and combine it with step 1). Any help?
    APEX 4.2
    Thanks.

    Maybe consider page branches on the relevant page, or the plugin
    - Process Type Plugin - EXCEL2COLLECTIONS

  • Data Load - Data / Table Mapping Columns Not Showing

    Hi,
    Using the new 4.1 functionality, I have created a data load wizard.
    Everything seems to have been created OK; however, when I run the wizard and get to the Data / Table Mapping stage (2nd page), the column names LOV contains no list of values.
    The table I am trying to load into is in another schema, but all the grants are there so that my schema can view it.
    Any idea what I need to do, or what I have missed, so the columns can be viewed?
    Thanks in advance.

    Hi,
    You have to log in as admin, and there should be an option to add schemas; once added, the schema should appear in the LOV.
    This link should help:
    Parsing Schema

  • Data Load function - max number of columns?

    Hello,
    I was able to successfully create a page with the Data Load Wizard. I've used this page wizard before with no issues; however, in this particular case I want to load a spreadsheet with a lot of columns (99 to be precise). When I run the page, it uploads the spreadsheet into the proper table, but only the first 45 columns. The remaining 54 columns are NULL for all rows. Also, there are 100 rows in the spreadsheet and it doesn't load them all (it loads 39).
    Is there a limit to the number of columns it can handle?
    Also, when I re-upload the same file, the data load results show that it inserted 0 rows, updated 100 rows, and failed on 0 rows. However, there are still only 39 rows in the table.
    Thoughts?
    Steve

    Steve wrote:
    FYI, I figured out why it wasn't loading all 100 rows. Basically, I needed to set two dependent columns in the load definition instead of one; it was seeing multiple rows in the spreadsheet and assuming some were the same record based on one column. So that part is solved...
    I still would like feedback on the number of columns the Data Load Wizard can handle, and whether there's a way to handle more than 45.
    The Data Load Wizard can handle a maximum of 46 columns: +{message:id=10107069}+

  • Data load component - add new column name alias

    APEX 4.2.2.
    Using the data load wizard, a list of columns and column name aliases has been created.
    When looking at the component (Shared Components > Data Load Tables > Column Name Aliases), it is possible to edit and delete a column alias there. Unfortunately, it does not seem possible to add a new alias. Am I overlooking something, or is there a workaround for this?

    Try this:
    REPORT ztest LINE-SIZE 80 MESSAGE-ID 00.
    DATA: name_int TYPE TABLE OF v_usr_name WITH HEADER LINE.
    DATA: BEGIN OF it_bkpf OCCURS 0.
            INCLUDE STRUCTURE bkpf.
    DATA:   username LIKE v_usr_name-name_text,
          END   OF it_bkpf.
    SELECT * FROM bkpf INTO TABLE it_bkpf UP TO 1000 ROWS.
    LOOP AT it_bkpf.
      name_int-bname = it_bkpf-usnam.
      APPEND name_int.
    ENDLOOP.
    SORT name_int BY bname.
    DELETE ADJACENT DUPLICATES FROM name_int.
    LOOP AT it_bkpf.
      READ TABLE name_int WITH KEY
        bname = it_bkpf-usnam
        BINARY SEARCH.
      IF name_int-name_text IS INITIAL.
        SELECT SINGLE name_text
          FROM v_usr_name
          INTO name_int-name_text
          WHERE bname = it_bkpf-usnam.
        MODIFY name_int INDEX sy-tabix.
      ENDIF.
      it_bkpf-username = name_int-name_text.
      MODIFY it_bkpf.
    ENDLOOP.
    Rob

  • ERPi Data load mapping Issue

    Hi,
    We are facing an issue with ERPi data load mappings. The mapping file (a txt file) has 36k records, and whenever we try to load the mappings it takes a very long time, nearly 1 hour 30 minutes. We want to reduce that time. Is there any way to reduce the data load mapping time?
    Hyperion version: 11.1.2.2.300
    Please help; thanks in advance!
    Thanks.

    Has anyone faced the same kind of issue?

  • Column Mapping while loading data via tab delimited text file

    While attempting to load data, I get an error message at the "Define Column Mapping" step. It reads as follows:
    1 error has occurred
    There are NOT NULL columns in SYSTEM.BASICINFO. Select to upload the data without an error
    The drop-down box has the names of the columns, but I don't know how to map them to the existing field names.
    By the way, where can I learn what goes in "Format" at this same step?
    Thanks!

    You can use Insert Into Array and wire the column index input instead of the row index, as shown in the attached picture.
    Just be sure that the number of elements in Array2 is the same as the number of rows in Array1.
    - tbob
    Inventor of the WORM Global
    Attachments:
    Add Column.png (2 KB)

  • Column number limitation in apex 4.1 data loader?

    Hi all!
    Is there a limit on the number of columns in the APEX 4.1 data loading page?
    My DB object has 59 columns, and they are all available, for example, in the unique column drop-down boxes of my data load table definition.
    On page two of the wizard-created data load pages ("Data / Table Mapping"), only 45 columns are shown. Those columns are correctly inserted into my table; the last 14 columns are ignored.
    So does anyone know if there is a limitation, and whether it can be extended?
    Thanks for any answer and regards
    Kai

    No, I do not have a solution for it.
    Splitting the file into several files with fewer columns each, with the primary key repeated, and then stitching the tables together post-upload might be easier than other routes.
    And then there are always the good old SQL*Loader and external tables; see the sketch below. But integrating these into APEX is not easy, as APEX runs on the server while the file is typically on the client's local hard drive.
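    A minimal external table sketch for a CSV file, assuming a hypothetical directory object DATA_DIR and file layout (adjust the columns to match the spreadsheet):
    -- DATA_DIR must point to a folder the database server can read
    create table staging_ext (
      pk_id number,
      col1  varchar2(100)
    )
    organization external (
      type oracle_loader
      default directory data_dir
      access parameters (
        records delimited by newline
        fields terminated by ',' optionally enclosed by '"'
      )
      location ('upload.csv')
    );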
    Regards,

  • Problem converting static data load mapping to MOLAP

    Hi
    as a prototyping exercise, I am converting some of our ROLAP dimensions and the corresponding data load mappings (one static-data mapping, i.e. a "-1" ID to handle unknowns in the fact data, and one for real data coming from a table) to MOLAP.
    The dimension itself converts and deploys correctly, and the real-data mapping also redeploys and executes correctly.
    HOWEVER,
    my static data mapping will not execute successfully.
    The mapping uses constants (ID = -1, NAME = 'UNKNOWN', etc.), and not all attributes are linked (this has been tried). My column WH_ID, which was the ROLAP surrogate key, gets converted to VARCHAR2 as expected. The mapping does deploy cleanly.
    The error I get is below. I have been banging my head on this for a couple of days and have tried searching the net and Metalink to no avail. I'm hoping someone out there can help.
    LOAD_STATIC_D_TRADER_IU
    Warning
    ORA-20101: 15:48:04 ***Error Occured in BUILD_DRIVER: In __XML_SEQUENTIAL_LOADER: In __XML_LOAD_ATTRS: Error loading attributes for hierarchy, D_TRADER.AW$NONE.HIERARCHY, level D_TRADER.TRADER.LEVEL, mapping group D_TRADER.TRADER.MAPGROUP1.DIMENSIONMAPGROUP. In __XML_LOAD_ATTRS_ITEM: In ___XML_LOAD_TEMPPRG: The SQL IMPORT command cannot convert from the TEXT type to the DECIMAL type.
    TRUNCATE_LOAD=false
    AW Execution status: Success
    15:48:00 Started Build(Refresh) of MARTS Analytic Workspace.
    15:48:00 Attached AW MARTS in RW Mode.
    15:48:01 Started Loading Dimensions.
    15:48:01 Started Loading Dimension Members.
    15:48:01 Started Loading Dimension Members for D_TRADER.DIMENSION (1 out of 1 Dimensions).
    15:48:03 Finished Loading Members for D_TRADER.DIMENSION. Added: 1. No Longer Present: 885.
    15:48:03 Finished Loading Dimension Members.
    15:48:03 Started Loading Hierarchies.
    15:48:03 Started Loading Hierarchies for D_TRADER.DIMENSION (1 out of 1 Dimensions).
    15:48:03 Finished Loading Hierarchies for D_TRADER.DIMENSION. 1 hierarchy(s) STANDARD Processed.
    15:48:03 Finished Loading Hierarchies.
    15:48:03 Started Loading Attributes.
    15:48:03 Started Loading Attributes for D_TRADER.DIMENSION (1 out of 1 Dimensions).
    15:48:04 Failed to Build(Refresh) MARTS Analytic Workspace.
    15:48:04 ***Error Occured in BUILD_DRIVER: In __XML_SEQUENTIAL_LOADER: In __XML_LOAD_ATTRS: Error loading attributes for hierarchy, D_TRADER.AW$NONE.HIERARCHY, level D_TRADER.TRADER.LEVEL, mapping group D_TRADER.TRADER.MAPGROUP1.DIMENSIONMAPGROUP. In __XML_LOAD_ATTRS_ITEM: In ___XML_LOAD_TEMPPRG: The SQL IMPORT command cannot convert from the TEXT type to the DECIMAL type.

    Hi, this looks like a bug in set-based mode when using numeric dimension attributes and loading them from a constant. Row-based mode is OK, as it stages the data before loading the AW, but you probably don't want that.
    A workaround is to add an expression operator in the map. You will have to add a link from a source table/constant into the expression operator to satisfy the map analyzer. You can then add expressions for your numeric attributes in the expression operator's output group, define the values for each expression, and map these expression outputs (not the numeric constants) into your dimension. Hopefully this makes sense.
    Cheers
    David

  • Data loading: formatting data for timestamp column

    Hi All,
    I have a table with a timestamp column named created_date. I want to upload data to that table using a data loading page, but there is a problem while uploading: in my CSV file, the created_date column contains data in the following two forms,
    09/03/2013 03:33am
    09/02/2013 03:24pm
    and this data throws the error ORA-01821: date format not recognized.
    On the Data / Table Mapping page, I tried MM/DD/YYYY HH12:MI:SS AM. What format should I use for am and pm?
    Please help me solve this.
    Thanks in advance
    Lakshmi

    I solved it by using the format MM/DD/YYYY HH:MIAM; see the check below.
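    That format can be verified directly in SQL; Oracle matches the AM/PM element case-insensitively, so both sample rows parse:
    select to_timestamp('09/03/2013 03:33am', 'MM/DD/YYYY HH:MIAM') from dual;
    select to_timestamp('09/02/2013 03:24pm', 'MM/DD/YYYY HH:MIAM') from dual;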
    Thanks
    Lakshmi

  • Loading data into a CLOB column

    I need to find out how to load about ten sentences of data into a CLOB column of a table in the database. I have a PL/SQL procedure that loads data from an XML file into various tables in the database. Recently, we added a column (test_dummy) to one of the tables and defined it as a CLOB. There is a corresponding node (detail_info) in the XML file that maps to this column. I need to figure out how to incorporate this into the PL/SQL procedure so that the data in the XML file for the node (detail_info) is loaded into test_dummy. Any ideas?

    Take it one step at a time. Use the extract function to pull an XML snippet out of a given XML document, as sketched below. The question couldn't be more vague; maybe an example would help?
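    A sketch of that approach, assuming the document is held in an XMLType variable and a hypothetical /root/detail_info path (table and node names are illustrative):
    declare
      l_xml  xmltype := xmltype('<root><detail_info>about ten sentences of text...</detail_info></root>');
      l_clob clob;
    begin
      -- extract() pulls the matching snippet; getClobVal() converts it to a CLOB
      l_clob := l_xml.extract('/root/detail_info/text()').getClobVal();
      insert into some_table (test_dummy) values (l_clob);
    end;
    /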
    Rahul
