Using spreadsheets to load new data into a system

OK, I am as GREEN as green can be here. I manage a group of users whose mainframe system is being rebuilt in APEX, and we have a requirement to load data into the new system from both an Excel sheet and a Janus unit barcode reader. Well, the developers are saying this can't be done, and that the REXX and CSP of our old system were more powerful than this new APEX.
Can we do what our requirement calls for, or are the developers right that APEX can't support external data loads from these sources?
TIA,
CPhilip
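For anyone finding this later: the requirement itself is not exotic. APEX ships a spreadsheet/CSV data-load wizard, and a barcode reader typically behaves as a keyboard wedge, typing its scan into whatever page item has focus, so no special support is needed on the APEX side. As a language-neutral sketch of the spreadsheet half (Python here purely for illustration; the column names are made up), parsing a CSV export of the Excel sheet into insert-ready rows is only a few lines:

```python
import csv
import io

def rows_from_csv(text):
    """Parse a CSV export of a spreadsheet into a list of dicts,
    one dict per data row, keyed by the header line."""
    return list(csv.DictReader(io.StringIO(text)))

# Hypothetical export: part id, scanned barcode, quantity.
sample = "part_id,barcode,qty\nP-100,4006381333931,5\nP-200,4006381333948,2\n"
for row in rows_from_csv(sample):
    print(row["part_id"], row["barcode"], row["qty"])
```

Each dict maps straight onto a table row, whether the insert is done by APEX itself, by SQL*Loader, or by a small script like this.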


Similar Messages

  • Can we use LSMW to load Transaction data

    Hi.
Can we use LSMW to load transactional data, or is it used only for master data?
    With Best Regards
    Mamatha

    Hi,
you can upload transactional data using:
    Standard Programs,
    Batch Input Recording,
    BAPI method,
    IDoc,
    best regards,
    Thangesh

  • Large Schema file registered and using folders to load the data (WebDAV)

I have a very large schema file that I have registered using binary registration (the schema file is > 30,000 lines).
My data has been in MarkLogic, where it is organized in a folder structure providing reuse of pieces of the XML.
It seemed the most logical way to load the data was to create the folder structure in XDB and move the files via WebDAV. We also have extensive XQueries written.
My question is: how do I query the data back out when it is loaded this way? I have read about and experimented with resource_view, but that is not going to get me what I need. I would like to make use of the XQueries that I have.
Can I do this, and if so, how?

Sure. Let me lay it out with some more specific information.
My schema has an overall root level of "Machine Makeup".
All of these items are defined under it, with a lot of element reuse and tons of attribute groups that are used throughout the XML schema. I can do a lot of things, but what I cannot do is change the structure of the schema.
The data is currently in a "folder structure" that more closely resembles the following. I have tried to annotate the number of files; keep in mind all of these are working documents and additional "records" (XML files) can be added.
Composite - contains 12 folders, and each of these folders contains XML documents
compfolder 1
- compfolder 12
Most of these folders contain < 200 .xml files (each with id.xml as the name of the file); however, one of these directories currently contains 1546 files. They all belong... no real way to split them up further.
At the same level as Composite:
About half of the folders at this level contain > 1000 but less than 3000 files.
Like
PartsUse: > 1000 but less than 3000
transfer of parts: 8000, and it can continue to grow
people: > 3000, and these would be used in the transfer of parts, like "who sold it". Every time someone new is involved, a new one is added, etc.
There are about 12 folders at this level.
Now, the way the system works: when a new person who is not in our list is involved, the users get a "new" empty XML document to fill in, and we assign it an id and place it in the folder.
So it would look something like this:
    Composite
folder 1 - 1000 xml files
    folder 2 - 200 xml files
    folder 12 - < 2000
    Locations
    contains < 2000 xml files
    Activities < 3000 xml files
    PartsTransfer 8000 xml files and growing
    materials < 1000 xml files
    setup < 1000 xml files
    people 3000 xml files and growing
    groupUse < 1000
    and so forth.
All of the folders contain the links I had previously laid out.
So Activities would have materialLink id = "333", peopleLink = "666",
and so on and so forth.
So, because my file numbers are greater than the optimum about half the time, how would I have 3 separate XMLType tables (I understand mine would be more than 3...) from the one schema file that is intertwined with the others? Can you show me an example? Would I annotate the schema at just those levels? This schema file is so huge that I would not want to have to annotate every element with a table. Can I pick and choose, to mimic the folder structure?
THANKS SO MUCH for your help. I really want to show that Oracle is the way. I can implement it with a simpler structure, but not at the size and complexity that I need. I haven't been able to find many examples with the new binary registration.
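On the "pick and choose" question: for object-relational XML DB storage (this does not apply to binary XML registration, where the document is stored in a single binary column), the usual approach is to annotate only the elements that should get their own XMLType tables with xdb:SQLInline="false" and xdb:defaultTable, leaving everything else at the inline default. A sketch under that assumption, with hypothetical element and table names:

```xml
<!-- Sketch only: applies to object-relational (not binary) registration.
     Element and table names here are hypothetical. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:xdb="http://xmlns.oracle.com/xdb">
  <!-- Stored out of line in its own XMLType table, mirroring a folder -->
  <xs:element name="People" type="xs:string"
              xdb:SQLInline="false" xdb:defaultTable="PEOPLE_TAB"/>
  <!-- Elements without annotations keep the default inline storage -->
  <xs:element name="Setup" type="xs:string"/>
</xs:schema>
```

So only the dozen or so folder-level elements would need annotating, not the whole 30,000-line schema.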

  • Data Extraction and ODS/Cube loading: New date key field added

    Good morning.
    Your expert advise is required with the following:
1) A data extract was done previously from a source with a full upload to the ODS and cube. An event is triggered from the source when data is available, and then the process chain will first clear all the data in the ODS and cube and then reload, activate, etc.
2) In the ODS, the 'forecast period' field has now been moved from the data fields to the key fields, as the user would like to report per period in future. The source will in future only provide the data for a specific period, not all the data as before.
3) Data must be appended in future.
4) The current InfoPackage in the ODS is a full upload.
5) The 'old' data in the ODS and cube must not be deleted, as the source cannot provide it again. They will report on the data per forecast period key in future.
    I am not sure what to do in BW as far as the InfoPackages are concerned, loading the data and updating the cube.
    My questions are:
    Q1) How will I ensure that BW will append the data for each forecast period to the ODS and cube in future? What do I check in the InfoPackages?
    Q2) I have now removed the process chain event that used to delete the data in the ODS and cube before reloading it again. Was that the right thing to do?
    Your assistance will be highly appreciated. Thanks
    Cornelius Faurie

    Hi Cornelius,
    Q1) How will I ensure that BW will append the data for each forecast period to the ODS and cube in future? What do I check in the InfoPackages?
-->> Try to load data into the ODS in overwrite mode with a full update, as before (this adds new records and overwrites previous records with the latest values). Push delta from this ODS to the cube.
If the existing ODS loads in addition mode, introduce one more ODS with the same granularity as the source, load it in overwrite mode (delta if possible, or full), and push only the delta subsequently.
Q2) I have now removed the process chain event that used to delete the data in the ODS and cube before reloading it again. Was that the right thing to do?
--> Yes, it is correct. Otherwise you will lose historic data.
    Hope it Helps
    Srini

  • Using sqlldr to load CLOB data from DB2

I am stuck trying to resolve this problem. I am migrating data from DB2 to Oracle. I used DB2 export to extract the data, specifying the lobsinfile clause. This created all the CLOB data in one file, so a typical record has a column with a reference to the CLOB data: "OUTFILE.001.lob.0.2880/", where OUTFILE.001.lob is the name specified in the export command, 0 is the starting position in the file, and 2880 is the length of the first CLOB.
When I try to load this data using sqlldr, I get a "file not found" error.
    The control file looks something like this:
    clob_1 FILLER char(100),
    "DETAILS" LOBFILE(clob_1) TERMINATED BY EOF,
    I'm using Oracle 11gR2 and DB2 9.7.5
    Your help is appreciated.

OK... here are additional details. Some names have changed, but the idea is the same. Also, sqlldr is executing in the same directory as the data files and the control file.
The primary data file is VOIPCACHE.dat; the secondary data file (the file with the LOB data) is VOIPCACHE.001.lob.
Control file:
load data
    infile 'VOIPCACHE.dat'
    badfile 'VOIPCACHE.bad'
    discardfile 'VOIPCACHE.dsc'
    replace into table VOIPCACHE
    fields terminated by ',' optionally enclosed by '"' TRAILING NULLCOLS
    (KEY1 "rtrim(:KEY1)",
    FIELD8,
    clob_1 FILLER char (100),
    "DATA" LOBFILE(clob_1) TERMINATED BY EOF)
    Snippet from Log file
FIELD7 NEXT * , O(") CHARACTER
    FIELD8 NEXT * , O(") CHARACTER
    CLOB_1 NEXT 100 , O(") CHARACTER
    (FILLER FIELD)
    "DATA" DERIVED * EOF CHARACTER
    Dynamic LOBFILE. Filename in field CLOB_1
    SQL*Loader-502: unable to open data file 'VOIPCACHE.001.lob.0.0/' for field "DATA" table VOIPCACHE
    SQL*Loader-553: file not found
    SQL*Loader-509: System error: No such file or directory
    SQL*Loader-502: unable to open data file 'VOIPCACHE.001.lob.0.47/' for field "DATA" table VOIPCACHE
    SQL*Loader-553: file not found
    SQL*Loader-509: System error: No such file or directory
    SQL*Loader-502: unable to open data file 'VOIPCACHE.001.lob.47.47/' for field "DATA" table VOIPCACHE
    SQL*Loader-553: file not found
    SQL*Loader-509: System error: No such file or directory
    SQL*Loader-502: unable to open data file 'VOIPCACHE.001.lob.94.58/' for field "DATA" table VOIPCACHE
    SQL*Loader-553: file not found
    SQL*Loader-509: System error: No such file or directory
    SQL*Loader-502: unable to open data file 'VOIPCACHE.001.lob.152.58/' for field "DATA" table VOIPCACHE
    SQL*Loader-553: file not found
    SQL*Loader-509: System error: No such file or directory
    SQL*Loader-502: unable to open data file 'VOIPCACHE.001.lob.210.206/' for field "DATA" table VOIPCACHE
    This is repeated for each record
    sqlldr command
    sqlldr userid=${SCHEMA}/${PASSWD}@$ORACLE_SID control=${CTLDIR}/${tbl}.ctl log=${LOGDIR}/${tbl}.log direct=true errors=50
I don't think the variables are important here.
    -EC
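The log explains the failure: each DB2 lobsinfile reference such as 'VOIPCACHE.001.lob.0.47/' packs the file name, starting offset, and length into one string, but SQL*Loader's LOBFILE clause treats the whole string as a literal filename, hence "file not found". (If memory serves, later SQL*Loader releases added an LLS field type for exactly this DB2 reference format, but verify that against your release.) One 11gR2-era workaround is to pre-split the big LOB file into one file per record and substitute real filenames into the .dat file; a sketch, with hypothetical paths:

```python
import os

def split_lobs(lob_path, refs, out_dir):
    """Split a DB2 'lobsinfile' blob into one file per record.

    refs is a list of DB2-style references like 'OUT.lob.0.2880/'
    (name.offset.length/). Returns the per-record filenames, which can
    then be written into the .dat file in place of the references, so a
    plain LOBFILE(...) TERMINATED BY EOF clause works unchanged.
    """
    os.makedirs(out_dir, exist_ok=True)
    with open(lob_path, "rb") as f:
        blob = f.read()
    out_names = []
    for i, ref in enumerate(refs):
        # 'OUT.lob.0.2880/' -> offset 0, length 2880
        parts = ref.rstrip("/").split(".")
        offset, length = int(parts[-2]), int(parts[-1])
        name = os.path.join(out_dir, "lob_%06d.dat" % i)
        with open(name, "wb") as out:
            out.write(blob[offset:offset + length])
        out_names.append(name)
    return out_names
```

After rewriting the references in the .dat file to these filenames, the existing "DATA" LOBFILE(clob_1) TERMINATED BY EOF clause should load each CLOB from its own file.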

  • Using CRM_CIC_CTI_LOAD to load CTI Data from AMC

Can anyone point me to any documentation on how to use CRM_CIC_CTI_LOAD to load CTI statistics into CRM? We are using AMC as middleware to integrate a Cisco VoIP system with CRM 5.0. I am looking at ways to get the CTI data to BI 7.0, starting with the standard content for 0CRM_CTI1.
    Any help is greatly appreciated.
    Regards,
    KB

    Hello Kevin,
    Have a look at these documents,
    [mySAP Customer Relationship Management|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f2910623-0c01-0010-de8f-d1926988a986]
    [Interaction Center Statistics|http://help.sap.com/saphelp_crm40/helpdata/en/bf/87357d7a964cf79368f597cbfb36d1/content.htm]
    [Customer Interaction Center in CRM |http://media.twango.com/m1/original/0078/3992b9c9d9264d5e833054013ec02cd9.pdf]
    Thanks
    Chandran

  • Unable to use SQL*Loader to load large data into a varchar2 column... please help

    Hi,
Currently the column in the database is a varchar2(4000) column that stores data in the form of XML, and data that has many carriage returns.
Currently I am trying to archive this column's value and later truncate the data in this column.
I was able to create a .csv file, but the sqlldr utility fails, as the data I am about to load is stored sometimes in the form of XML and sometimes as a list of attributes separated by newline characters.
After several failed attempts... I would like to know if this can be achieved using sqlldr, or if sqlldr is a bad choice and I should go for an export/import instead?
    Thanks in advance...
    -Kevin

Currently the column in the database is a varchar2(4000) column that stores data in the form of XML and data that has many carriage returns.

Can I ask why your XML data has carriage returns in it? The nature of XML dictates that the structure is defined by the tags. Of course you can have CRs in your actual data between those tags, but if someone is hard-coding CRs into the XML to make it look pretty, that is not how XML was intended to be used.

I was able to create a .csv file but when using sqlldr, the utility fails as the data I am about to load is stored both in the form of "xml" sometimes and sometimes as a list of attributes separated by newline characters.

It probably can be done (can you provide a sample of the data so we can see the structure?), but you would probably be best revisiting the code that generates the CSV and ensuring that it is output in a simpler format.

  • My iMac hard drive died-how do I use Time Capsule to re-load new data to new Mac?

    My old system was a Mid-2007 iMac running Lion. I'd like to set up a Mac Mini with the data from the old iMac. What options do I have?

Use Migration Assistant, which should be in the Utilities folder of the Mini. Follow the prompts, choosing to restore from a TC backup. If I recall correctly, it should ask you if you want the Mini to inherit the backup from the iMac.

  • Need info on using external tables to load/write data

    Hi All,
    We are planning to load conversion/interface data using external tables feature available in the Oracle database.
    Also for outbound interfaces, we are planning to use the same feature to write data in the file system.
If you have done a similar exercise in any of your projects, please share sample code units. Also, let me know if there are any cons/limitations to this approach.
    Thanks,
    Balaji

    Please see old threads for similar discussion -- http://forums.oracle.com/forums/search.jspa?threadID=&q=external+AND+tables&objID=c3&dateRange=all&userID=&numResults=15&rankBy=10001
    Thanks,
    Hussein
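For what a minimal version looks like (all names hypothetical; treat this as a sketch, not a drop-in): an ORACLE_LOADER external table reads a flat file in place, and an ORACLE_DATAPUMP external table can write one via CREATE TABLE AS SELECT. The main cons to be aware of: the files must live in a directory on the database server, ORACLE_LOADER tables are read-only (no DML, no indexes), and ORACLE_DATAPUMP writes a proprietary dump format rather than plain text.

```sql
-- Hypothetical names throughout; the directory object must point at a
-- path on the database server that the oracle user can read and write.
CREATE OR REPLACE DIRECTORY conv_dir AS '/u01/conversion';

-- Inbound: query a flat file as if it were a table.
CREATE TABLE conv_parts_ext (
  part_id   NUMBER,
  part_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY conv_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('parts.csv')
);

-- Outbound: ORACLE_DATAPUMP can write, but only via CREATE TABLE AS
-- SELECT, and the output is a binary dump file, not a plain CSV.
CREATE TABLE parts_out
ORGANIZATION EXTERNAL (
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY conv_dir
  LOCATION ('parts_out.dmp')
) AS SELECT part_id, part_name FROM conv_parts_ext;
```

If the outbound interface must produce plain text, UTL_FILE or a spooling job is usually a better fit than the data pump driver.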

  • Using VBA for loading Query data into Excel workbook

    Hi all.
I simply want to load data from a BEx query into an Excel worksheet using VBA, because of the report formats and the footer section at the end of the results.
Any code examples, tutorials, comments, or suggestions will be appreciated.
    thanx in advance,
    Gediminas

The difficulty is that I don't know the number of rows the report will return, and I need my footer only on the LAST page of the workbook.
Another thing I can't imagine how to do using standard BEx functionality is designing a complex column header set (merged columns, sub-columns, etc.).

  • Loading new data in cfwindow

    Here's the problem.
Have a page that lists a bunch of records. Each record has a link to edit the record, and I've set the code up to add the record ID to the URL of the page that will open up in cfwindow:
    <a href="javascript:;"
    onClick="ColdFusion.Window.create('edit', 'Top Priority',
    'edittoppriority.cfm?id={intltcwptoppriorityid}&island=<cfoutput>#intislandid#</cfoutput> ',
    {x:100,y:50,height:550,width:1000,modal:false,closable:true,
    draggable:true,resizable:true,center:true,initshow:true,
    minheight:200,minwidth:200})">Edit</a>
The code runs fine, and when I click the Edit link, the window opens, the record in edittoppriority.cfm is displayed, and I can update it.
After I close the cfwindow, go to a new record, and click the Edit link, I get the previous record loaded in the cfwindow.
If I refresh the page and click the same record, I get the information for that record.
So... how do I get cfwindow to flush out the variables that were sent, so that it uses the new variables the next time a link is selected?
Hoping someone can help me out before I have to move on and go with the old-fashioned pop-up window.
    Thanks

That is one way to fix it, but you will end up with a bunch of cfwindows loaded in memory.
Another way to do it is to use the JavaScript functions: ColdFusion.navigate() to reload another page in the same window, and ColdFusion.Window.show() and ColdFusion.Window.hide() to open and close the window.

  • I can no longer compose using my existing email database; the system does not pull up existing email addresses

Suddenly, when I compose an email, the system no longer recognizes my existing email database or the emails I have sent to clients on many occasions. I have to manually pull up the email copy and paste it into the send section. This does not happen when I use my Gmail in Internet Explorer.

    Hi Olivier,
Thanks very much for that. It worked on my PowerBook (running Tiger). I went to the link and followed the instructions and it worked. Did I mention that my dad, who has an iMac running Leopard, now has the same problem? I thought I would just do the same, but the Preferences in Mail for Leopard are of course different. There is no icon called "Server Settings" under the Accounts section like there is on mine. Do you have a link for the same kind of instructions for Leopard?
    Thanks again,
    Bryce

  • Load new data only when needed

I am trying to reduce the number of unnecessary load requests to the server. I have this:
var ds1 = new Spry.Data.XMLDataSet("test2.php", "export/row", {sortOnLoad: "grade_id", sortOrderOnLoad: "ascending", distinctOnLoad: true, useCache: false, loadInterval: 2000});
But I want to update the regions associated with this dataset only when I click a button, so that I don't kill the server. What is the best way to do that?

    Call ds1.loadData(); when you click the button.

  • Help - Fatal Error When Loading New Updates of System Software

    Help please!!
I have an 8120 and just logged in to sync (as I do weekly) and was alerted to a new software update, so I clicked yes, etc., and it all started well. Then, part way through the install, it suddenly issued a "Fatal Error - please try to reload the software" message.
Now I have no way of connecting to the BB, as it appears I am now without any software (just a circle with a line through it and the number 507 on the screen).
    Can anyone suggest what to do please?
    thanks
    Gavin

    Hi and Welcome to the Forums!
    Ouch...that sounds not so good. Here is a KB concerning the 507 code:
    KB02792 Error message "507" appears on the BlackBerry smartphone while upgrading BlackBerry Device Software
    Good luck and let us know!

  • Loading incremental data

    Hi,
I am using an 11.1.0.7 DB with 11.1.0.7B AWM. I would like to load data incrementally.
Sometimes my fact data contains tuples which were loaded before (corrections/restatements), and when I load incrementally it just replaces the existing tuple in the cube.
Is there a better way to do an incremental load, other than getting all the relevant data (from the historic tables), using GROUP BY, and then loading it?
    Thanks,

The term "incremental" has two common meanings in the context of loading data into a cube. First, it could refer to loading a subset of records from the source tables. For example, a fact table has data for the years 2005 - 2010 and data is added daily. The goal of an incremental load might be to load only those records that were added to or updated in the fact table yesterday (e.g., '29-MAR-2010'). Solutions (1), (2) and (3) apply to that situation.
"Incremental" might also be used to describe a situation where data read during a load changes, rather than replaces, the data in the cube. For example, data such as the following already exists in the cube:
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 100.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 150.00
    and the following data is added to the fact table (and these are the only records for these time, product and customer values):
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 15.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 -25.00
    And the intent is to have data appear as follows in the cube:
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 115.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 125.00
    What you need to know is that data read from a table always replaces the data that exists in the cube. So, if you just load from the fact table into the cube the data will be:
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 15.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 -25.00
    There are two things that you could do that would yield the following data in the cube:
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 115.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 125.00
    A) You could load the following records from the fact table directly into the cube.
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 115.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 125.00
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 15.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 -25.00
    The SQL used to load the cube can do a SUM .... GROUP BY. The net result will be:
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 115.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 125.00
    (I think you might need to map the cube with joins to get the sum ... group by. Be sure to check the SQL in the cube_build_log to make sure you are getting the SQL you expect.)
B) You could load the following records into a separate cube (let's call it SALES_CUBE_UPDATE, while your main cube is named SALES_CUBE).
    28-MAR-2010 PRODUCT_1 CUSTOMER_1 15.00
    28-MAR-2010 PRODUCT_3 CUSTOMER_2 -25.00
As a post-load task, you can update the SALES cube to be the sum of the current value of SALES plus the value in the SALES_CUBE_UPDATE cube. You would do this with OLAP DML code such as:
    sales_cube_sales_stored(sales_cube_measure_dim 'SALES') = sales_cube_sales_stored + sales_cube_update_sales
    If you use this method, you would ideally:
- Filter (in OLAP DML terms, LIMIT) the dimensions of the cube to only those values that have data in the SALES_CUBE_UPDATE cube, so you don't spend time looping over dimension values that don't have data.
    - Loop over the composite dimension. E.g.,
    SET sales_cube_sales_stored(sales_cube_measure_dim 'SALES') = sales_cube_sales_stored + sales_cube_update_sales ACROSS sales_cube_composite
    Or, if the cube is partitioned (almost all cubes benefit from partitioning) you will loop the partition template. E.g.,
    SET sales_cube_sales_stored(sales_cube_measure_dim 'SALES') = sales_cube_sales_stored + sales_cube_update_sales ACROSS sales_cube_prt_template
    In most cases, you will do this only at the leaf levels and then aggregate so the entire process will look something like this:
    1) Load data into the sales_cube_update cube using the LOAD command (create this using a cube script in AWM). Don't bother to aggregate as part of the load.
    2) Run an OLAP DML program such as:
    " Limit to lowest levels
    LIMIT time TO time_levelrel 'DAY'
    LIMIT product TO product_levelrel 'ITEM'
    LIMIT customer TO customer_levelrel 'CUSTOMER'
    " Keep only those values where data exists in the SALES_CUBE_UPDATE cube.
    LIMIT time KEEP sales_cube_update_sales NE na
    LIMIT product KEEP sales_cube_update_sales NE na
    LIMIT customer KEEP sales_cube_update_sales NE na
    " Add the values of the sales_cube_update cube to the values in the sales_cube.
    " Loops the partition template for better performance.
    SET sales_cube_sales_stored(sales_cube_measure_dim 'SALES') = sales_cube_sales_stored + sales_cube_update_sales ACROSS sales_cube_prt_template
    " Save the data.
    UPDATE
    COMMIT
3) Aggregate the cube (create an AGGREGATE command in an AWM cube script).
    Notes:
    - Be sure to clear data from the SALES_CUBE_UPDATE cube before or after you load new data into it. (E.g., use the OLAP DML CLEAR command.)
- If you will be running OLAP DML commands on data that exists in multiple partitions, you can parallelize the execution of the OLAP DML code. See the following post: http://oracleolap.blogspot.com/2010/03/parallel-execution-of-olap-dml.html
Well, a bit of a lengthy explanation. I hope it helps. Good luck.
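The arithmetic that options (A) and (B) both implement, adding each incoming delta to the existing cell value keyed by (time, product, customer) so that the cube's replace-on-load semantics still produce the right totals, can be sketched outside OLAP DML. A purely illustrative Python sketch:

```python
from collections import defaultdict

def apply_deltas(cube, deltas):
    """Add incremental fact rows into existing cube cells.

    cube and deltas both map (day, product, customer) -> amount. This
    mirrors the sum-before-load idea: old value + delta is computed
    first, so loading the result with replace semantics is correct.
    """
    merged = defaultdict(float, cube)
    for key, amount in deltas.items():
        merged[key] += amount
    return dict(merged)

# The worked example from the answer above.
cube = {("28-MAR-2010", "PRODUCT_1", "CUSTOMER_1"): 100.00,
        ("28-MAR-2010", "PRODUCT_3", "CUSTOMER_2"): 150.00}
deltas = {("28-MAR-2010", "PRODUCT_1", "CUSTOMER_1"): 15.00,
          ("28-MAR-2010", "PRODUCT_3", "CUSTOMER_2"): -25.00}
print(apply_deltas(cube, deltas))
```

This reproduces the 115.00 and 125.00 totals from the example; in the real system the same addition happens either in the staging SQL (option A) or in the OLAP DML SET ... ACROSS loop (option B).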
