Dump Load Task - Important
Hi,
I need help ASAP.
Can anyone please explain how to use the DUMP LOAD TASK?
In my case, the data exists in SAP ERP and I'd like to export it to BPC. The suggestion is to use this task because it retrieves data from a SQL table and moves it into the proper SQL table in BPC.
The page on HELP.SAP.COM provides some information, but it's not clear to me, since it mentions a task called Import SQL, etc. Please help! I have no idea how to make it work.
Velázquez
Hi Sorin,
Thanks for the response, it is very helpful !!
One more thing: in the beginning I thought this task retrieved data directly from a table and imported it directly into a BPC table (wb, fac2, fact), without using a flat file. I came to that conclusion partly because the "Import into SQL Server" task was mentioned as working with Dump & Load.
However, you say that I need a flat file (which has the data), a conversion file (to match data between BPC-R3) and the D&L task. This sounds a lot like using normal import packages... so, what is the difference?
At this moment, I'm thinking of getting the data from R3 into a flat file and using a "normal" import package for BPC.
If I'd like to work directly with data in SQL tables, as mentioned before, which task/package is the best for me (not using flat files)? Or is it always necessary to use a flat file?
Thanks in advance, I really appreciate your help.
Velázquez
Similar Messages
-
Multiple Convert/Dump and Load tasks in a single SSIS package?
I'd appreciate some input. We have a single data file with about 1,000,000 rows of data spanning 28 time periods in the columns across each row. The file is too large to import at one time, so we have created 9 different transformation files to pull subsets of the time periods using MVAL. The end user does not want to break apart the file and/or schedule 9 separate imports each time. So I've attempted to create a single SSIS package that runs each of the 9 required component imports, but I'm running into the following error any time I try to run more than one import at once:
An error occurred while running a package: Exception of type 'System.OutOfMemoryException' was thrown.
I've tried using a ForEach loop to simply loop through each of the transformation files and run an import for each one it finds. But I understand that SSIS doesn't release memory until the end of the ForEach loop, which would explain an out-of-memory issue. I then broke it out and specified 9 distinct Convert tasks and 9 distinct Dump and Load tasks in the package, ordered Convert 1 --> Dump and Load 1 --> Convert 2 --> Dump and Load 2, etc. This results in the same out-of-memory issue any time I enable more than 1 convert/dump & load task in the package - they are not running simultaneously, but sequentially. I don't understand this. Looking at the logs, it creates the temp files for the first import and adds them into BPC, then creates the temp files for the second import but fails prior to the second load. This works for a smaller data file, but shouldn't the memory be released after each task, allowing subsequent tasks to complete? Any recommendations on how to address and automate this load?
Quick update - I created unique SSIS packages for the distinct imports and called those packages from within the initial package that is kicked off through Data Manager, and I ran into the same out-of-memory issue. Any ideas, suggestions, or help would be greatly appreciated.
Thanks!
Josh -
Data Loader - Only imports first record; remaining records fail
I'm trying to use Data Loader to import a group of opportunities. Every time I run the Data Loader, it only imports the first record. All the other records fail with the message "An unexpected error occurred during the import of the following row: 'External Unique Id: xxxxxxx'". After running the Data Loader, I can modify the data file and remove the first record that was imported. Running the Data Loader again, the first row (previously the second row) imports successfully.
Any idea what could be causing this behavior?
We need a LOT more information, starting with the OS and the version of ID, including any applied patches.
Next we need to know if you are doing a single record per page or multiple records, whether the placeholders are on the master page, how many pages are in the document and if they all have fields on them (some screen captures might be useful -- embed them using the camera icon on the editing toolbar on the webpage rather than attaching, if it works [there seem to be some issues at the moment, though only for some people]).
What else is on the page? Are you really telling it to merge all the records, or just one?
You get the idea... a full description of what you have, what you are doing, and what you get instead of what you expect. -
Which Backup server version is required for Sybase cumulative dump/load
Hello,
To perform a cumulative dump and load (supported from ASE 15.7 SP100 and up) with a 3rd-party API, which Backup Server version is needed?
We use backup server API version 2.1, and sybackup.h does not have any definition for cumulative dump. This header file only defines DATABASE and TRANSACTION as command_type, so when "dump database foo cumulative" is issued, command_type becomes UNKNOWN.
Since we support all the ASE versions from 15.0.3 and up, we ideally need a backup server API that works for ASE versions from 15.0.3 and also works with cumulative backup. Is there such an API version?
Thanks,
Ali
What I'm told is that the version wasn't bumped because the changes shouldn't really affect the API functionality; the API module would still receive a byte stream and be able to store and return it regardless of the command_type.
However, you can easily modify your sybackup.h to reflect the new command_types.
Replace the current command_type definitions with the following:
#define TRANSACTION 0 /* transaction log dump or load */
#define DATABASE 1 /* database dump or load */
#define CUMULATIVE 2 /* Cumulative dump to dump or load */
#define UNKNOWN 3 /* Pre-API SQL Server in use. Backup Server
                     has received a DUMP/LOAD command from a
                     SQL Server which is not passing the
                     'Command Type' to Backup Server. Therefore,
                     the command type cannot be determined. */
#define DELTA 4 /* Delta dump to dump or load */
-
Is it possible to load or import 1 journal from file?
Hi to all,
We are trying to find a way to load or import some journals from an Excel file into the journal table of SAP BPC. As far as we have seen, it is possible to load the whole table, but not to add just one journal to it.
Does anybody know if it is possible to load 1 journal from a file into SAP BPC and add it to the journal table,
in a standard way or using logic?
Thanks a lot!
I don't think there is a method available to add ONLY 1 journal record, since the sequence of journals is an auto-generated key. The best way would be to simply add the journal, so that the appropriate values are logged and tracked accordingly. The whole table may be loaded, since it has a defined key for the sequence of entered journal transactions.
Hope this helps. -
Hi,
My problem is related to a temperature-control task driven from a file. I have two heaters and an LS336 temperature controller unit. I want a temperature profile that can be loaded from a file; this would make it much easier for me to control temperature and handle some other issues.
I have attached two VIs. In one of them, the array should be divided into sub-arrays; in the second one, I cannot read the file correctly, and I can't figure out why.
Thanks for your help and efforts. Best.
Solved!
Go to Solution.
Attachments:
loading text file.vi 24 KB
load task from file.vi 31 KB
Untitled 1.txt 1 KB
OK, I guess I know what you want to do; I still do not understand how your VI could achieve that, though.
If you have LV2011, I attached an example that writes a file with some temperatures and reads it from the file afterwards.
If not, here is a picture of the whole code, so you can rewrite it:
Note: it's not fancy, but it is already functional. You can make a better one for your application.
I hope this solves your problem. Good luck with your experiment.
Attachments:
example read write array from file.vi 13 KB -
To read dump file without importing dump file
Hi All,
I want to check table data from a dump file without using the import utility.
How can I open a dump file in Oracle 10g Release 2?
Thanks in advance.
You cannot open an Oracle dump file directly. However, you can view its contents using the import utility (no data is actually imported; it only shows the SQL).
In Data Pump, use the parameter sqlfile=abc.sql.
e.g.: impdp scott/tiger@orcl directory=dpump sqlfile=abc.sql dumpfile=xyz.dmp logfile=xyz_log.log full=y
In the case of original import, use the parameter "show".
e.g.: imp scott/tiger@orcl file=abc.dmp show=y log=abc_log.log full=y
Hope this helps.
--neeraj -
CE Bank Statement Load and Import Concurrent Request
Hi All,
I am trying to submit a concurrent request via PL/SQL for the CE Bank Statement Loader, but request_id always returns 0.
Being new here, I can't understand why, since I've done the same successfully with GL's SQLLDR for Journal Import.
Here's a part of my code:
fnd_global.APPS_INITIALIZE (1090,50563,260);
Commit;
reqid := FND_request.submit_request('CE','CESSQLLDR',null,null,FALSE,'LOAD',1000,'filename.dat','/dir/filedir/statementdir',3205,11066,null,null,null,null,null,'N',null,null);
Commit;
dbms_output.put_line(reqID);
Forgive me if this is such novice problem.
Thanks in advance!
Hi,
Then I guess you've got an error in the parameters somewhere - is the concurrent program registered under the application you are passing in as the application short name? Have you got the program short name correct?
Regards,
Gareth -
Hi friends,
My problem occurs during the loading of the module.
Under the Execution Plans tab, I clicked NEW and gave the name HRMS.
After that I enabled the FULL LOAD ALWAYS check box, and under the SUBJECT AREAS tab I clicked the ADD/REMOVE button.
Underneath it I selected all the sub-modules of HRMS and added them.
After that, I set the parameters.
Under the Parameters tab I clicked GENERATE, entered 1, and then set the parameters like this:
NAME------------------------------------VALUE
DBconnection_OLAP---------------Datawarehouse
DBconnection_OLTP---------------ORA_R1211
I entered them in the above manner; I didn't use any Excel/flat-file sources.
After that I clicked SAVE, then BUILD, and then RUN NOW.
It showed the message that it is running successfully.
Then I went to the CURRENT RUN tab and clicked Refresh in order to watch the steps proceed.
It also showed that the total number of tasks is 270.
But my problem is that the tasks are not completing successfully. What I mean is that not even a single task runs successfully, and it throws the following error:
Some steps failed. Number of incomplete tasks whose status got updated to stopped: 0
Number of incomplete task details whose status got updated to stopped: 1606
I don't know why this error occurs, as I have done all the steps correctly.
Why is it not allowing me to complete the loading of the module?
Help me friends, it is an urgent requirement.
Thanks in Advance,
All izz well
GTA...
Hi,
When you click Generate Parameters, it should load the logical and physical folders under the Parameters tab. I see that this is missing in your Parameters tab, and this might be the reason the mappings are not being picked up for execution.
If you have not defined logical and physical sources then you need to do that in setup and physical data sources tab.
Hope this helps.
Thanks,
Navin Kumar Bolla -
Load or import custom style mapping in link options – is that possible somehow?
Hi Everyone,
I'm working with Instructions for Use and Technical Manual documents in InDesign. I'm constantly fine-tuning our templates, and now I have to refresh them again, because of a brand refresh. I'm using the Custom Style Mapping in the Link Options panel a lot, because it helps us to copy the text from older documents to the new ones with the right formatting, with less effort. The only thing I miss is the “Load/Import Custom Style Mapping” option from this panel.
Do you know if there's any option to export/import/load these mappings from one document to another? Is it possible to write a script for that? I find this part of InDesign quite neglected; it seems to me (based on my search) that no other people are using it. I feel lonely, help me here!
(I have created many new mappings in an older document, and I’d like to be able to use them in the new templates as well, but I can only do that if I map the styles again. And it’s quite time consuming. Maybe I'm just using too many paragraph styles, I have no idea – this could be another question: what is too much when it comes to styles...)
Thanks a lot,
Zita
Sync is not intended to be used as a backup service like you are talking about, but it will work as long as you save the Recovery Key, as Jefferson mentioned (you also need your Sync username ''[email address you used]'' and your account password).
Mozilla has just started working on a new "cloud service", code-named '''PiCL''' for "Profile in CLoud", which I expect will be a lot easier to use and might allow the user to access their bookmarks and other data from the internet without adding a "borrowed" PC to your Sync account.
Short dump TSV_TNEW_PAGE_ALLOC_FAILED when import SAPKB70016
Hi all,
I'm trying to import support package SAPKB70016 in my QAS system and I get an error. The import stops at phase XPRA_EXECUTION, and in transaction SM37 I saw a running job named RDDEXECL. This job is cancelled with the dump TSV_TNEW_PAGE_ALLOC_FAILED. I have already changed some parameters and applied some notes, but I can't solve this issue.
Parameter                      Before        After
ztta/roll_area                 30000000      100000000
ztta/roll_extension            4000317440    8000000000
abap/heap_area_dia             2000683008    4000683008
abap/heap_area_nondia          2000683008    4000683008
abap/heap_area_total           2000683008    4000683008
em/initial_size_MB             392           1024
abap/shared_objects_size_MB    20            150
es/implementation              map           std
JOB LOG:
Job started
Step 001 started (program RDDEXECL, variant , user ID DDIC)
All DB buffers of application server FQAS were synchronized
ABAP/4 processor: TSV_TNEW_PAGE_ALLOC_FAILED
Job cancelled
ST22 LOG:
Memory location: "Session memory"
Row width: 510
Number of rows: 0
Allocated rows: 21
Newly requested rows: 288 (in 9 blocks)
Last error logged in SAP kernel
Component............ "EM"
Place................ "SAP-Server FQAS_QAS_01 o
Version.............. 37
Error code........... 7
Error text........... "Warning: EM-Memory exhau
Description.......... " "
System call.......... " "
Module............... "emxx.c"
Line................. 1897
The error reported by the operating system is:
Error number..... " "
Error text....... " "
The amount of storage space (in bytes) filled at termination time was:
Roll area...................... 99661936
Extended memory (EM)........... 8287273056
Assigned memory (HEAP)......... 1376776176
Short area..................... " "
Paging area.................... 49152
Maximum address space.......... 18446743890583112895
If the error occures in a non-modified SAP program, you may be able to
find an interim solution in an SAP Note.
If you have access to SAP Notes, carry out a search with the following
keywords:
"TSV_TNEW_PAGE_ALLOC_FAILED" " "
"CL_ENH_UTILITIES_XSTRING======CP" or "CL_ENH_UTILITIES_XSTRING======CM008"
"GET_DATA"
Now, I don´t know what I can do to solve this problem.
Can you help me?
Thanks
Hi all,
Gagan, I have already changed my parameters according to the above post. I increased these parameters to the maximum allowed, but the dump still persists.
Bhuban, this server has 16 GB RAM and a 600 GB HD.
total used free shared buffers cached
Mem: 16414340 4973040 11441300 0 454436 3572592
-/+ buffers/cache: 946012 15468328
Swap: 20479968 0 20479968
Size Used Avail Use% Mounted on
441G 201G 218G 48% /oracle
20G 6.5G 12G 36% /sapmnt
25G 21G 2.7G 89% /usr/sap/trans
25G 8.8G 15G 39% /usr
20G 14G 5.1G 73% /
Anil, I already stopped my application and my database, rebooted the OS too, and tried again; no success.
What else can I do?
Thanks for all. -
Hierarchy loading in Import Manager
Hi,
I have the following scenario to load a hierarchy, but Import Manager throws the error message "Duplicate in Column-2".
However, I am able to build the same structure manually in Data Manager for a 3-level hierarchy, which confirms that the hierarchy table supports the structure.
Column1 Column-2
(parent) (Child)
3M
3M Abrasive
Abrasive Roll
3M Coil
Coil Copper
GE Abrasive
SH-Ent Coil
The data shows that manufacturers 3M and GE both have 'Abrasive', which is quite normal, but I am not able to load it through the 'Create Hierarchy' option after selecting the 2 fields in Import Manager.
IM throws a duplicate error: "Duplicate in Column-2: Abrasive".
I hope the example communicates the problem properly.
Appreciate your help/tips.
-reo
Guys,
I found the way to do it. The steps to follow are:
1) In the partition, select all the multilevel (in my case 3 levels) individual columns one by one, and select all the fields brought to the right pane to 'combine' them.
2) Select the combined field in the partition, right-click, and select 'Split Partition' from the context menu.
3) In the map, map this partition field on the source side to the 'Name' field of the destination hierarchy table.
This works excellently.
However, I have a challenge converting 220k hierarchy input records. The system just hangs, even though I have split the input files into 6-7 sections in MS Access.
Any suggestion /tips appreciated.
-reo -
Tuple Loads using Import Manager
Running on 7.1 SP05 using the SAP-supplied Vendor Master repository archive.
Attempting to load company code data into the Vendors main table. Company Code Data is set up as a tuple table in the Console.
When defining the map in Import Manager it will only allow me to select Vendor Number as the matching field. And, as a result, it will not load all of the data in my load file - it loads the first value of Vendor Number and skips the other records where Vendor Number is duplicated. This makes sense; however, I need to be able to select both Vendor Number and Company Code as my matching fields when defining the map. These two fields are the unique identifier for the load file and all the data would be loaded correctly if I was able to select Company Code as well.
I read that you can set the tuple update option, and I did this by adding Company Code, but for some reason it is still only using Vendor Number as the matching field, not Company Code.
Does anyone know how to get Company Code to appear in the "Mapped destination fields" so that I can match on both Vendor Number and Company Code? Or are there other options for loading the tuple table accurately?
You match on the correct vendor in the Match Records tab. This positions you at the correct vendor.
To match on the correct Company Code you use the update tuple option (as you have done). The Company Code should be the only field that you use to match on the Company Code tuple. So, when the incoming record has the same Company Code the tuple will get updated. If the incoming record has a different Company Code, then you can set the options to either create a new tuple or skip the input record.
You cannot match on the Company Code in the Match Records tab.
Edited by: Tom Belcher on Feb 18, 2011 10:13 AM -
Hierarchy Loads using Import Manager
Running on 7.1 SP05 using the SAP-supplied Vendor Master repository archive.
Attempting to load the Regions hierarchy table which contains three fields: Country (lookup flat), Region Code (text) and Region Name (text). When defining the map in Import Manager it will only allow me to select Region Name as the matching field. And, as a result, it will not load all of the data in my load file - it loads the first value of Region Name and skips the other records where Region Name is duplicated. This makes sense; however, I need to be able to select both Region Name and Country as my matching fields when defining the map. These two fields are the unique identifier for the load file and all the data would be loaded correctly if I was able to select Country as well. I have tried making Country both a unique field and a display field in Console, but to no avail.
Does anyone know how to get Country to appear in the "Mapped destination fields" so that I can match on both Region Name and Country?
Hi Reinhardt,
I think you can achieve it by setting the Country field as a Display Field (DF) as well, along with the existing Region Name (DF), in the Regions hierarchy table in the MDM Console.
Regards,
Mandeep Saini -
Map cannot be loaded in import manager | MDM
Hi ,
I am trying to upload some data into the customer repository (I am on MDM 5.5 SP05). I have a few XML files (IDocs) from the customer master in an ECC system, which I am providing as the source for the Import Manager. In the source preview I can see the following tables:
- DEBMAS06
- E1KNVIM
- E1KNVPM
Now when I try to open the import map 00_DEBMDM06_R3.map (the standard map provided by SAP), it gives me the error message "the map cannot be loaded because table E1KNA1M is not a source".
What do I need to do to get this file loaded into the Import Manager?
Regards
Deepak Singh
Hi Maheshwari/Harrison,
I tried setting the type to XML Schema as you suggested, but now the Import Manager won't open at all and I get the following error: "The element DEBMAS06 is used but not declared in the DTD/Schema".
In the Console, inside the customer repository > Admin > XML Schemas, I have made sure that the schema name DEBMDM06 points to the DEBMDM.DEBMDM06.xsd provided in the business content with the customer repository. In fact, for all the schemas (lookup tables, bank check tables, customer check tables) I have made sure they point to the right XSDs provided in the business content.
Now what do you think is the problem?
Regards
Deepak Singh