Golden Gate - Initial Load using parallel process groups
Dear all,
I am new to GG and I was wondering whether GG can support an initial load with parallel process groups. I have managed to do an initial load using "Direct Bulk Load" and "File to Replicat", but I have several big tables and the Replicat is not catching up. I am aware that GG is not ideal for initial loads, but it is complicated to explain why I am using it.
Is it possible to use the @RANGE function while performing an initial load, regardless of which method is used (file to Replicat, direct bulk load, ...)?
Thanks in advance
You may use Data Pump for the initial load of large tables.
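To the @RANGE question: yes, @RANGE can be combined with an initial-load method by running several extract/replicat pairs, each taking a slice of the table. A minimal sketch, assuming a source table SCOTT.BIGTAB with key column PKEY (all group names, hosts, and paths here are placeholders, not from the original post):

```
-- Initial-load Extract #1 of 2: rows whose key hashes into range 1
EXTRACT eini1
USERIDALIAS ggextadm
RMTHOST targethost, MGRPORT 7809
RMTFILE ./dirdat/i1, MEGABYTES 500
TABLE scott.bigtab, FILTER (@RANGE (1, 2, PKEY));

-- Initial-load Extract #2 of 2: the complementary range
EXTRACT eini2
USERIDALIAS ggextadm
RMTHOST targethost, MGRPORT 7809
RMTFILE ./dirdat/i2, MEGABYTES 500
TABLE scott.bigtab, FILTER (@RANGE (2, 2, PKEY));
```

Each extract writes its own file and is paired with its own Replicat, so the two halves of the table load in parallel. The same filter can alternatively be placed on the Replicat side in the MAP statement.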
Similar Messages
-
Golden Gate Initial Load - Performance Problem
Hello,
I'm using the fastest method of initial load. Direct Bulk Load with additional parameters:
BULKLOAD NOLOGGING PARALLEL SKIPALLINDEXES
Unfortunately, the load of a big table (734 billion rows, around 30 GB) takes about 7 hours. The same table loaded with a normal INSERT statement in parallel via DB link takes 1 hour 20 minutes.
Why does it take so long using Golden Gate? Am I missing something?
I've also noticed that the load time with and without PARALLEL parameter for BULKLOAD is almost the same.
Regards
Pawel
Hi Bobby,
It's Extract / Replicat using SQL Loader.
Created with following commands
ADD EXTRACT initial-load_Extract, SOURCEISTABLE
ADD REPLICAT initial-load_Replicat, SPECIALRUN
The Extract parameter file:
USERIDALIAS {:GGEXTADM}
RMTHOST {:EXT_RMTHOST}, MGRPORT {:REP_MGR_PORT}
RMTTASK replicat, GROUP {:REP_INIT_NAME}_0
TABLE Schema.Table_name;
The Replicat parameter file:
REPLICAT {:REP_INIT_NAME}_0
SETENV (ORACLE_SID='{:REPLICAT_SID}')
USERIDALIAS {:GGREPADM}
BULKLOAD NOLOGGING NOPARALLEL SKIPALLINDEXES
ASSUMETARGETDEFS
MAP Schema.Table_name, TARGET Schema.Table_tgt_name,
COLMAP(USEDEFAULTS),
KEYCOLS(PKEY),
INSERTAPPEND;
Regards,
Pawel -
Initial Load R3AM1 Parallel Processing (load objects)
Dear
After optimizing the parallel processing for the initial load between ISU -> CRM (Initial Load ISU / CRM Issues), I would like to know the following:
On the CRM system, in transaction R3AM1, you can monitor the load objects. Is it possible to process the load objects in parallel?
At this time we can see each block being processed one at a time. We can, however, see that there are multiple queues available.
Any info would be most welcome.
Kind regards
Lucas
There you have it!
Increase MAX_PARALLEL_PROCESSES (by changing CRM table SMOFPARSFA) from 5 to 10 and you will see more queues being processed simultaneously.
More info:
The maximum number of loads and requests that can be processed simultaneously is defined in table SMOFPARSFA under MAX_PARALLEL_PROCESSES. The hierarchy in order of preference is initial load, then request load, then SDIMA.
If the total number of running loads and running requests is as high as or higher than this number, the remaining loads have to wait until a process is free again. Consequently, this parameter can be modified if the processing of loads/requests is delayed due to the non-availability of free processes. The default value of this parameter is 5 (as it is in your CRM as well).
There's no need to change anything in table 'CRMPAROLTP' on the CRM system -
Golden Gate Initial load from 3 tb schema
Hi
My source database is a 9i RDBMS on Solaris 5.10. I would like to build an 11gR2 database on Oracle Enterprise Linux.
How can I do the initial load of a 3 TB schema from my source to my target (which is a different platform and a different RDBMS version)?
Thanks
Couple of options.
Use old export/import to do the initial load. While that is taking place, turn on change capture on the source so any transactions that take place during the exp/imp timeframe are captured in the trails. Once the initial load is done, start Replicat with the trails that have accumulated since the export started. Once source and target are fully synchronized, do your cutover to the target system.
Do an in-place upgrade of your 9i source, to at least 10g. Reason: use transportable tablespaces (or, you can go with expdp/impdp). If you go the TTS route, you will also have to take into account endian/byte ordering of the datafiles (Solaris = big, Linux = little), and that will involve time to run RMAN convert. You can test this out ahead of time both ways. Plus, you can get to 10g on your source via TTS since you are on the same platform. When you do all of this for real, you'll also be starting change capture so trails can be applied to the target (not so much the case with TTS, but for sure with Data Pump). -
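The first option above (exp/imp plus change capture) can be sketched as a command sequence; the group names, trail path, and SCN handling are illustrative assumptions, not details from the thread:

```
-- 1. In GGSCI on the source, start change capture BEFORE the export begins:
ADD EXTRACT extsrc, TRANLOG, BEGIN NOW
ADD RMTTRAIL ./dirdat/rt, EXTRACT extsrc
START EXTRACT extsrc

-- 2. Note the current SCN, then run a consistent export, e.g.:
--      exp ... FLASHBACK_SCN=<scn>
-- 3. Import into the 11gR2 target.
-- 4. Start Replicat so only changes made after the export SCN are applied:
ADD REPLICAT repsrc, EXTTRAIL ./dirdat/rt
START REPLICAT repsrc, AFTERCSN <scn>
```

AFTERCSN prevents double-applying rows that the export already contained, which is the key to a clean catch-up and cutover.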
Group data locked error for MM01 using parallel processing
Hello gurus,
I am using the call-transaction method (MM01) with parallel processing (around 9 threads). Around 10 percent of the materials are sometimes getting locked.
This is happening randomly: one day I don't have any locking errors, the next day I do. Any ideas why this could be? Are there any prerequisites I need to check before executing the parallel processing?
Thank you in advance.
sasidhar p
Hi Sasidhar,
I guess you are either extending the sales data or the MRP data. Just make sure that you process these transactions sequentially for a single material. We can use parallel processing for different materials, but for a single material, if we go for parallel processing, we can definitely expect the lock-object error.
Kind Regards
Eswar -
Using Parallel Processing for Collection worklist Generation
We are scheduling the program UDM_GEN_WORKLIST in Background mode with the below mentioned values in the variant
Collection Segment - USCOLL1
Program Control:
Worklist valid from - Current date
Distribution Method - Even Distribution to Collection Specialists
Parallel Processing:
Number of jobs - 1
Package Size - 500.
Problem:
The worklist gets generated, but it drops a lot of customer items when the program is scheduled in the background using the above parameters.
Analysis:
- When I run the program UDM_GEN_WORKLIST in online mode all customers come through correctly on the worklist.
- When I simulate strategy with the missing customers it evaluates them high so there is nothing wrong with the strategy and evaluation.
- I increased the package size to its maximum, but it still doesn't work.
- Nothing looks different in terms of the collection profile on the BP master.
- There is always a fixed set of BPs that are missing from the worklist.
It looks like there is something I don't know about running these jobs correctly using the parallel processing parameters; any help or insight in this matter would be highly appreciated.
Thanks,
Mehul.
Hi Mehul,
I have a similar issue now; for the past couple of days, the worklist generation has been failing in background mode, although when I run it in foreground processing it completes without any problem.
My question is: can you confirm that you reduced the package size to 1?
So your parameters are: number of jobs: 1 and package size: 1.
Is that right? Did it completely solve your issue? -
Microsoft JScript compilation error alert on initial load using IE7
I have noticed that a javascript alert window is displayed whenever an Applet is initially launched using IE7.
This does not happen using IE8 (or FF, SF, Chrome, etc).
I thought it was our implementation, but the alert is also shown launching any Applet from the following;
http://java.sun.com/applets/jdk/
There are typically 2 alerts and after pressing OK, the Applets appear to run successfully.
The specific error is;
Alert title: Microsoft JScript compilation error.
Alert content: Expected ')' in regular expression.
I don't always remember this error using IE7.
Maybe it's related to the JRE?
I am currently using: (build 1.6.0_26-b03)
Any solution out there?
879129 wrote:
..I thought it was our implementation, but the alert is also shown launching any Applet from the following;
http://java.sun.com/applets/jdk/
I just had a look at the source of two of the pages from the 1.4 section of that page (Animator (1) & ArcTest) and saw some very questionable HTML for the <tt>applet</tt> element, as well as two seemingly unrelated scripts. What do you get for the GIF Animator (the applet form) instead of those? It uses <tt>deployJava.js</tt>, which is about the only deployment option that Oracle cares about at the moment.
How do you run a Process Chain using a parallel processing group
Does anyone know the answer to the above question please?
The problem I am getting is that currently the process chain is running on the client server and is timing out!
Is there a config setting or some other setting needed to stop it from timing out?
Hi Jas Matharu,
Hope the following links will give you a clear idea about process chains and clear your doubts.
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/events/sap-teched-03/using%20process%20chains%20in%20sap%20business%20information%20warehouse
Business Intelligence Old Forum (Read Only Archive)
http://help.sap.com/saphelp_nw2004s/helpdata/en/8f/c08b3baaa59649e10000000a11402f/frameset.htm
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/8da0cd90-0201-0010-2d9a-abab69f10045
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/19683495-0501-0010-4381-b31db6ece1e9
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/36693695-0501-0010-698a-a015c6aac9e1
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/9936e790-0201-0010-f185-89d0377639db
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/3507aa90-0201-0010-6891-d7df8c4722f7
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/263de690-0201-0010-bc9f-b65b3e7ba11c
/people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
Siggi's weblogs for data load error and how to restart process chain
/people/siegfried.szameitat/blog/2005/07/28/data-load-errors--basic-checks
/people/siegfried.szameitat/blog/2006/02/26/restarting-processchains
Regards,
Ravikanth. -
Parallel Processing: Unable to capture return results using RECEIVE
Hi,
I am using parallel processing in one of my programs and it is working fine, but I am not able to collect the return results using the RECEIVE statement.
I am using
CALL FUNCTION <FUNCTION MODULE NAME>
STARTING NEW TASK TASKNAME DESTINATION IN GROUP DEFAULT_GROUP
PERFORMING RETURN_INFO ON END OF TASK
and then in subroutine RETURN_INFO I am using RECEIVE statement.
My RFC is calling another BAPI and doing explicit commit as well.
Any pointer will be of great help.
Regards,
Deepak Bhalla
Message was edited by: Deepak Bhalla
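For reference, the aRFC pattern being described can be sketched as below; the function module name Z_MY_RFC, its parameters, and the use of the DEFAULT RFC group are illustrative assumptions, not taken from this thread:

```
REPORT zdemo_arfc.
* Hedged sketch: call an RFC-enabled FM in parallel tasks and collect results.
DATA: gv_sent TYPE i,
      gv_done TYPE i,
      gv_msg  TYPE c LENGTH 80.

START-OF-SELECTION.
  DATA lv_task TYPE c LENGTH 8.
  DO 5 TIMES.
    lv_task = sy-index.
    CALL FUNCTION 'Z_MY_RFC'          " hypothetical RFC-enabled FM
      STARTING NEW TASK lv_task
      DESTINATION IN GROUP DEFAULT
      PERFORMING return_info ON END OF TASK
      EXPORTING
        iv_input = sy-index.
    gv_sent = gv_sent + 1.
  ENDDO.
* Without this WAIT the program can end before the callbacks run,
* which is one reason RECEIVE appears to deliver nothing.
  WAIT UNTIL gv_done >= gv_sent.

FORM return_info USING p_task TYPE clike.
  DATA lv_result TYPE i.
* RECEIVE must be executed inside the ON END OF TASK form.
* The MESSAGE additions surface the error text when sy-subrc is non-zero.
  RECEIVE RESULTS FROM FUNCTION 'Z_MY_RFC'
    IMPORTING
      ev_result = lv_result
    EXCEPTIONS
      communication_failure = 1 MESSAGE gv_msg
      system_failure        = 2 MESSAGE gv_msg.
  gv_done = gv_done + 1.
ENDFORM.
```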
I used the WAIT command after the RFC call and it worked. Additionally, I used the MESSAGE addition in the RECEIVE statement, because RECEIVE was returning sy-subrc 2. -
Allowing parallel processing of cube partitions using OWB mapping
Hi All,
I am using an OWB mapping to load a MOLAP cube partitioned on the TIME dimension. I configured the OWB mapping by checking the 'Allow parallel processing' option with the number of parallel jobs set to 2. I then deployed the mapping. The data loaded using the mapping is spread across multiple partitions.
The server has 4 CPUs and 6 GB RAM.
But when I kick off the mapping, I can see only one partition being processed at a time in XML_LOAD_LOG.
If I process the same cube in AWM using parallel processing, I can see that multiple partitions are processed.
Could you please suggest whether I missed any setting on the OWB side.
Thanks
Chakri
Hi,
I have assigned the OLAP_DBA role to the user under which the OWB map is running, and the job started off.
But, it failed soon with the below error:
***Error Occured in __XML_MAIN_LOADER: Failed to Build(Refresh) XPRO_OLAP_NON_AGG.OLAP_NON_AGG Analytic Workspace. In __XML_VAL_MEASMAPS: In __XML_VAL_MEASMAPS_VAR: Error Validating Measure Mappings. In __XML_FND_PRT_TO_LOAD: In __XML_SET_LOAD_STATUS: In ___XML_LOAD_TEMPPRG:
Here is the log :
Load ID Record ID AW Date Actual Time Message Time Message
3973 13 SYS.AWXML 12/1/2008 8:26 8:12:51 8:26:51 ***Error Occured in __XML_MAIN_LOADER: Failed to Build(Refresh) XPRO_OLAP_NON_AGG.OLAP_NON_AGG Analytic Workspace. In __XML_VAL_MEASMAPS: In __XML_VAL_MEASMAPS_VAR: Error Validating Measure Mappings. In __XML_FND_PRT_TO_LOAD: In __XML_SET_LOAD_STATUS: In ___XML_LOAD_TEMPPRG:
3973 12 XPRO_OLAP_NON_AGG.OLAP_NON_AGG 12/1/2008 8:19 8:12:57 8:19:57 Attached AW XPRO_OLAP_NON_AGG.OLAP_NON_AGG in RW Mode.
3973 11 SYS.AWXML 12/1/2008 8:19 8:12:56 8:19:56 Started Build(Refresh) of XPRO_OLAP_NON_AGG.OLAP_NON_AGG Analytic Workspace.
3973 1 XPRO_OLAP_NON_AGG.OLAP_NON_AGG 12/1/2008 8:19 8:12:55 8:19:55 Job# AWXML$_3973 to Build(Refresh) Analytic Workspace XPRO_OLAP_NON_AGG.OLAP_NON_AGG Submitted to the Queue.
I am using AWM (10.2.0.3A with OLAP Patch A) and OWB (10.2.0.3).
Can anyone suggest why the job failed this time ?
Regards
Chakri -
Parallel processing in background using Job scheduling...
(Note: Please understand my question completely before redirecting me to parallel processing links in sdn. I hve gone through most of them.)
Hi ABAP Gurus,
I have read a bit till now about parallel processing. But I have a doubt.
I am working on a data transfer of around 5 million accounting records from legacy to R/3 using batch input recording.
Now if all these records reside in one flat file and I then process that flat file in my batch input program, I guess it will take days to do it. So my boss suggested
using parallel processing in SAP.
Now, from the SDN threads, it seems that we have to create a remote-enabled function module for it and so on.
But I have a different idea. I thought to divide these 5 million records into 10 flat files instead of just one and then to run the custom BDC program with 10 instances, which will process the 10 flat files in the background using job scheduling.
Can this be also called parallel processing ?
Please let me know if this sounds wise to you guys...
Regards,
Tushar.
Thanks for your reply...
So what do you suggest: how can I use parallel processing for transferring the 5 million records present in one flat file using a custom BDC?
I am posting my custom BDC code for the million-record transfer below (this code is for the creation of material masters using BDC):
report ZMMI_MATERIAL_MASTER_TEST
       no standard page heading line-size 255.
include bdcrecx1.
parameters: dataset(132) lower case default
                         '/tmp/testmatfile.txt'.
***  DO NOT CHANGE - the generated data section - DO NOT CHANGE  ***
*   If it is nessesary to change the data section use the rules:
*   1.) Each definition of a field exists of two lines
*   2.) The first line shows exactly the comment
*       '* data element: ' followed with the data element
*       which describes the field.
*       If you don't have a data element use the
*       comment without a data element name
*   3.) The second line shows the fieldname of the
*       structure, the fieldname must consist of
*       a fieldname and optional the character '_' and
*       three numbers and the field length in brackets
*   4.) Each field must be type C.
***  Generated data section with specific formatting  ***
data: begin of record,
* data element: MATNR
        MATNR_001(018),
* data element: MBRSH
        MBRSH_002(001),
* data element: MTART
        MTART_003(004),
* data element: XFELD
        KZSEL_01_004(001),
* data element: MAKTX
        MAKTX_005(040),
* data element: MEINS
        MEINS_006(003),
* data element: MATKL
        MATKL_007(009),
* data element: BISMT
        BISMT_008(018),
* data element: EXTWG
        EXTWG_009(018),
* data element: SPART
        SPART_010(002),
* data element: PRODH_D
        PRDHA_011(018),
* data element: MTPOS_MARA
        MTPOS_MARA_012(004),
      end of record.
data: lw_record(200).
***  End generated data section  ***
data: begin of t_data occurs 0,
matnr(18),
mbrsh(1),
mtart(4),
maktx(40),
meins(3),
matkl(9),
bismt(18),
extwg(18),
spart(2),
prdha(18),
MTPOS_MARA(4),
end of t_data.
start-of-selection.
perform open_dataset using dataset.
perform open_group.
do.
*read dataset dataset into record.
read dataset dataset into lw_record.
if sy-subrc eq 0.
clear t_data.
split lw_record
at ','
into t_data-matnr
t_data-mbrsh
t_data-mtart
t_data-maktx
t_data-meins
t_data-matkl
t_data-bismt
t_data-extwg
t_data-spart
t_data-prdha
t_data-MTPOS_MARA.
append t_data.
else.
exit.
endif.
enddo.
loop at t_data.
*if sy-subrc <> 0. exit. endif.
perform bdc_dynpro using 'SAPLMGMM' '0060'.
perform bdc_field using 'BDC_CURSOR'
'RMMG1-MATNR'.
perform bdc_field using 'BDC_OKCODE'
'=AUSW'.
perform bdc_field using 'RMMG1-MATNR'
t_data-MATNR.
perform bdc_field using 'RMMG1-MBRSH'
t_data-MBRSH.
perform bdc_field using 'RMMG1-MTART'
t_data-MTART.
perform bdc_dynpro using 'SAPLMGMM' '0070'.
perform bdc_field using 'BDC_CURSOR'
'MSICHTAUSW-DYTXT(01)'.
perform bdc_field using 'BDC_OKCODE'
'=ENTR'.
perform bdc_field using 'MSICHTAUSW-KZSEL(01)'
'X'.
perform bdc_dynpro using 'SAPLMGMM' '4004'.
perform bdc_field using 'BDC_OKCODE'
'/00'.
perform bdc_field using 'MAKT-MAKTX'
t_data-MAKTX.
perform bdc_field using 'BDC_CURSOR'
'MARA-PRDHA'.
perform bdc_field using 'MARA-MEINS'
t_data-MEINS.
perform bdc_field using 'MARA-MATKL'
t_data-MATKL.
perform bdc_field using 'MARA-BISMT'
t_data-BISMT.
perform bdc_field using 'MARA-EXTWG'
t_data-EXTWG.
perform bdc_field using 'MARA-SPART'
t_data-SPART.
perform bdc_field using 'MARA-PRDHA'
t_data-PRDHA.
perform bdc_field using 'MARA-MTPOS_MARA'
t_data-MTPOS_MARA.
perform bdc_dynpro using 'SAPLSPO1' '0300'.
perform bdc_field using 'BDC_OKCODE'
'=YES'.
perform bdc_transaction using 'MM01'.
endloop.
*enddo.
perform close_group.
perform close_dataset using dataset. -
Parallel processing using BLOCK step..
Hi,
I have used parallel processing with a BLOCK step. I have put in a multiline container element. In the BLOCK step I have visibility of another container element generated because of the block step (multiline container_LINE). The required number of parallel processes is getting created, but the problem is that the value in the multiline container_LINE is not getting passed to the send-mail step. I have checked the binding; everything is OK. Please help.
Sukumar.
Hi,
When I am sure that I am doing the binding properly but it doesn't work, then:
1. I activate the workflow template definition (no joke).
2. I write the magic word
/$sync
in the command line, to refresh buffers.
3. I delete the strange binding defined with drag & drop and define it one more time using the old method from the former binding editors in R/3 4.6c systems. I take container elements from the lists of possible entries or I write their names directly. I don't use drag & drop.
Regards
Mikolaj
There are no problems, just "issues" and "improvement opportunities". -
Migrating db using golden gate
Hi,
I have heard about something called Golden Gate, which is used for migration.
How can I migrate a DB using Golden Gate? Where does it have to be installed? Are there any other features of it?
Please let me know.
Thank you
Hi;
Please see:
Migration of an Oracle Database Across OS Platforms [ID 733205.1]
http://www.databasejournal.com/sqletc/article.php/3780766/Successful-Database-Migration.htm
Regard
Helios -
Use of parallel processing profiles with SNP background planning
I am using APO V5.1.
In SNP background planning jobs I am noticing different planning results depending on whether I use a parallel processing profile or not.
For example if I use a profile with 4 parallel processes, and I run a network heuristic to process 5 location products I get an incomplete planning answer.
Is this expected behaviour? What are the 'good practices' for using these profiles?
Any advice appreciated...
Hello,
I don't think using a parallel processing profile is a good idea when you run the network heuristic, since in the network heuristic the sequence of the location products is quite important. The sequence is determined by the low-level code, as you may already know.
For example, in the case of external procurement, it must first plan the distribution center and then the supplying plant, and in the case of in-house production, it must first plan the final product and then the components.
If you use parallel processing, the data set, which is sorted by low-level code, would be divided into several blocks processed at the same time. This may mess up the planning sequence. For example, before the final product is planned in one block, the component is already planned in another block. When the final product is planned, a new requirement for the component is generated, but the component will not be planned again, which results in a supply shortage of the component.
If there're many location products, maybe dividing the data set manually is a good practice. You can put those related location products in one job, and set several background jobs to plan different data set.
Best Regards,
Ada -
Parallel Processing framework using package BANK_PP_JOBCTRL
Hi All,
I am analyzing the different parallel processing techniques available in SAP and need some input on the parallel processing framework in package BANK_PP_JOBCTRL.
Can someone please let me know if you have any documentation available on this framework.
I have a couple of questions on this framework, as mentioned below.
1) This framework was developed as part of the SAP Banking solution. So is it possible to leverage it for other modules in SAP, since it is now part of the SAP_ABA component?
2) What are the benefits of it over other techniques like the asynchronous remote function call (aRFC)?
Any input on this will be of great help, since there is very little documentation available on this topic on the net.
Regards/Ajay Dhyani
Hello,
Apologies, I never saw this thread and your query, and I had already worked it out myself in the meantime. If you are still interested, here are some inputs for you.
Within package BANK_PP_JOBCTRL you will find these function modules. I have mentioned the use of each as well.
RBANK_PP_DEMO_GENERATE_DATA: To create the Business data for Parallel Processing.
RBANK_PP_DEMO_CREATE_PACKMAN: To create Packages out of the business data.
RBANK_PP_DEMO_START : To process data in parallel.
RBANK_PP_DEMO_RESTART: To re-process failed records during parallel Processing.
You will need to call the above in your report program in the same sequence as shown, based on your requirement. I used only the first three.
To generate events you will need to execute report RBANK_PP_GENERATE_APPL via SE38 to create the application; this will create the function modules with the numbers shown below.
Events: This PPF automatically triggers various events during the execution of the Start Program. Each of this event is associated with a custom function module which contains the business logic.
For implementing this framework, at least the below mentioned methods should be implemented .
0205 – Create Package Templates : This method is used to write the logic for creating packages which in turn decides the data to be processed in parallel. This function module is called in loop at the loop ends only when the exporting parameter E_FLG_NO_PACKAGE has a value ‘X’ passed back to the Parallel processing framework.
1000 – Initialize Package :This method is the first step in processing a package. It fetches all the parameters required for the parallel processing to start. All the parameters are passed to this FM as importing parameters and it is the responsibility of this FM to save it in global parameters so that it can be utilized by Parallel processing framework.
1100 – Selection per Range : This method is used to read data for a package. The objects selected must be buffered in global data areas of the application for later processing. The package information is stored as interval in global parameters and this information is used to select the package specific data.
1200 – Selection for Known Object List: This method is used instead of method 1100 if it is a restart run. The objects to be processed are known already.
1300 – Edit Objects: The processing logic to be implemented using parallel processing for the selected objects is written in this method. This function module is used to implement the business logic and
Also, obiviously you would like to log your messages , so the framwrok provides macros to do it.
Let me know if you need some further help as I know there is very little information provided on this.
Regards/Ajay