ABAP Data flow
Hi
Can we replicate an ABAP data flow and modify the copy for a history data upload?
Yes. The replica is a copy, and whatever changes you make to it will not impact the other ABAP data flows.
Similar Messages
-
DS 4.2 get ECC CDHDR deltas in ABAP data flow using last run log table
I have a DS 4.2 batch job where I'm trying to get ECC CDHDR deltas inside an ABAP data flow. My SQL Server log table has an ECC CDHDR last_run_date_time (e.g. '6/6/2014 10:10:00') where I select it at the start of the DS 4.2 batch job run and then update it to the last run date/time at the end of the DS 4.2 batch job run.
The problem is that CDHDR has the date (UDATE) and time (UTIME) in separate fields, and inside an ABAP data flow only a limited set of DS functions is available. For example, outside of the ABAP data flow I could use the DS function concat_date_time on UDATE and UTIME, so that I could have a where clause of 'concat_date_time(UDATE, UTIME) > last_run_date_time and concat_date_time(UDATE, UTIME) <= current_run_date_time'. However, inside the ABAP data flow the DS function concat_date_time is not available. Is there some way to concatenate UDATE + UTIME inside an ABAP data flow?
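For illustration, the idea behind concatenating the two fields can be sketched in Python (not DS or ABAP syntax; the timestamps are the ones from the post): because UDATE (YYYYMMDD) and UTIME (HHMMSS) are fixed-width digit strings, their plain concatenation compares chronologically as a string.

```python
# Illustration only: CDHDR-UDATE is YYYYMMDD and CDHDR-UTIME is HHMMSS,
# so their concatenation is a fixed-width key that sorts chronologically.
def ts_key(udate: str, utime: str) -> str:
    return udate + utime  # '20140606' + '101000' -> '20140606101000'

last_run = ts_key("20140606", "101000")   # last_run_date_time
current  = ts_key("20140609", "141435")   # current_run_date_time
row      = ts_key("20140609", "104827")   # one CDHDR record's UDATE/UTIME

# The delta-window test from the where clause, as a plain string comparison:
in_window = last_run < row <= current
```

The same ordering property is what any concatenation-based where clause on CDHDR relies on.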
Any help is appreciated.
Thanks,
Brad

Michael,
I'm trying to concatenate date and time and here's my ABAP data flow where clause:
CDHDR.OBJECTCLAS in ('DEBI', 'KRED', 'MATERIAL')
and ((CDHDR.UDATE || ' ' || CDHDR.UTIME) > $CDHDR_Last_Run_Date_Time)
and ((CDHDR.UDATE || ' ' || CDHDR.UTIME) <= $Run_Date_Time)
Here are DS print statements showing my global variable values:
$Run_Date_Time is 2014.06.09 14:14:35
$CDHDR_Last_Run_Date_Time is 1900.01.01 00:00:01
The issue is I just created a CDHDR record with a UDATE of '06/09/2014' and UTIME of '10:48:27' and it's not being pulled in the ABAP data flow. Here's selected contents of the generated ABAP file (*.aba):
PARAMETER $PARAM1 TYPE D.
PARAMETER $PARAM2 TYPE D.
concatenate CDHDR-UDATE ' ' into ALTMP1.
concatenate ALTMP1 CDHDR-UTIME into ALTMP2.
concatenate CDHDR-UDATE ' ' into ALTMP3.
concatenate ALTMP3 CDHDR-UTIME into ALTMP4.
IF ( ( ALTMP4 <= $PARAM2 )
AND ( ALTMP2 > $PARAM1 ) ).
So $PARAM1 corresponds to $CDHDR_Last_Run_Date_Time ('1900.01.01 00:00:01') and $PARAM2 corresponds to $Run_Date_Time ('2014.06.09 14:14:35'). But from my understanding ABAP data type D is for date only (YYYYMMDD) and doesn't include time, so is my time somehow being defaulted to '00:00:00' when it gets to DS? I ask this as a CDHDR record I created on 6/6 wasn't pulled during my 6/6 testing but this 6/6 CDHDR record was pulled today.
I can get last_run_date_time and current_run_date_time into separate date and time fields but I'm not sure how to build the where clause using separate date and time fields. Do you have any recommendations or is there a better way for me to pull CDHDR deltas in an ABAP data flow using something different than a last run log table?
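Since the generated PARAMETER is TYPE D (date only), the time part of the global variable is lost, which would explain the behavior described above. With separate date and time fields, the window test can be written as a compound condition that never needs a combined timestamp. A sketch in Python (illustrative only, not ABAP; values are the ones from the post):

```python
# Sketch of the delta window using separate date and time fields, so no
# date-only (TYPE D) parameter ever has to carry a time part.
def in_delta_window(udate, utime, last_d, last_t, cur_d, cur_t):
    # strictly after the last run ...
    after_last = (udate > last_d) or (udate == last_d and utime > last_t)
    # ... and no later than the current run
    up_to_cur  = (udate < cur_d) or (udate == cur_d and utime <= cur_t)
    return after_last and up_to_cur

# The missed record (UDATE 20140609, UTIME 104827) against the run window:
hit = in_delta_window("20140609", "104827",   # UDATE, UTIME
                      "19000101", "000001",   # last run date, time
                      "20140609", "141435")   # current run date, time
```

In the ABAP data flow this would become two ORed comparisons per boundary instead of one concatenated comparison.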
Thanks,
Brad -
Using ABAP DATA FLOW to pull data from APO tables
I am trying to use an ABAP data flow to pull data from APO and receive error 150301. I can do a direct table pull and receive no error, but when I try to put the same table in an ABAP data flow I get the issue. Any help would be great.
Hi
I know you "closed" this, but someone else might read it, so I'll add that when you use an ABAP dataflow, logic such as table joins and filters can be pushed down to ECC (which can be seen in the generated ABAP).
Michael -
Hi All,
I have started to run a simple ABAP data flow in BODS: the data flow has been designed, and when it is executed the error below is thrown.
It seems to be an ABAP driver issue, not a problem with the flow design.
Could anyone please suggest what the actual issue is? Let me know if you need any further information.
Thanks.
Best Regards,
Edu

Hi Konakanchi,
Please share the job execution log for more details, such as whether this is an RFC configuration issue or a BODS issue. Meanwhile, can you please check the RFC connection test through SAP GUI?
Thanks,
Daya -
Creating abap data flow, open file error
hello experts,
I am trying to pull all the fields of the MARA table in BODS, so I am using an ABAP data flow. But after executing the job I got the error "can't open the .dat file".
I am new to ABAP data flows, so I think I may have made a mistake in the configuration of the datastore.
Can anyone guide me on how to create a datastore for an ABAP data flow?

In your SAP Applications datastore, are you using "Shared Directory" or "FTP" as the "Data transfer method"? Given the error, probably the former. In that case, the account used by the Data Services job server must have access to wherever SAP is putting the .DAT files. When you run an ABAP dataflow, SAP runs the ABAP extraction code (of course) and then exports or saves the results to a .DAT file, which I believe is just a tab-delimited flat text file, in the folder "Working directory on SAP server." This is specified from the perspective of the SAP server, e.g., "E:\BODS\TX", where the E:\BODS\TX folder is local to the SAP application server. I believe this folder is specified as a directive to the ABAP code, telling SAP where to put the .DAT files. The DS job server then picks them up from there, and you tell it how to get there via "Application path to the shared directory," which, in the above case, might be
\\SAPDEV1\BODS\TX if you shared out the E:\BODS folder as "BODS" and the SAP server was SAPDEV1. Anyway: the DS job server needs to be able to read files at
\\SAPDEV1\BODS\TX, and it may not have any rights to do so, especially if it's just logging in as Local System. That's likely your problem. In a Windows networking environment, I always have the DS job server log in using an AD account, which then needs to be granted privileges to, in our example's case, the
\\SAPDEV1\BODS\TX folder. It also comes in handy for getting to data sources, sometimes.
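As a quick sanity check of the access rights described above, one could run something like this under the same account the DS job server uses (an illustrative sketch; the UNC path below is the example from this thread, not a real default):

```python
# Pre-flight check sketch: can this account list the shared .dat directory?
import os

def can_read_share(path: str) -> bool:
    try:
        os.listdir(path)   # raises PermissionError/OSError if inaccessible
        return True
    except OSError:
        return False

# e.g. can_read_share(r"\\SAPDEV1\BODS\TX")
```

If this returns False when run as the job server's account, the problem is file-system rights, not the datastore configuration.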
Best wishes,
Jeff Prenevost
Data Services Practice Manager
itelligence -
ABAP Data Flows - Parallel Execution?
Hi Guys,
If I have a data flow that includes within it multiple ABAP data flows, let's say 3, I see that when the job is started, only 1 ABAP flow at a time will run, even though there is no precedence enforced and they could all kick off together.
Is there a way to get these ABAP flows to all run in parallel in SAP?
Thanks,
Flip.

Hi Flip,
I've never actually tried this, but the Performance Optimization guide only specifies that dataflows and workflows can be processed in parallel. So I would suppose that if you placed each ABAP dataflow into its own separate dataflow and then encapsulated the 3 unlinked dataflows in a workflow, the ABAP dataflows should execute in parallel, since they are initiated by the dataflows.
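To picture why the unlinked-objects arrangement helps, here is a toy sketch (plain Python, nothing DS-specific; all names are invented): objects with no precedence links between them can simply be launched concurrently, like independent tasks in a pool.

```python
# Toy analogy: three "dataflows" with no links between them run concurrently.
from concurrent.futures import ThreadPoolExecutor

def run_abap_dataflow(name: str) -> str:
    # stand-in for one encapsulated ABAP dataflow
    return name + ": done"

unlinked = ["DF_ABAP_1", "DF_ABAP_2", "DF_ABAP_3"]
with ThreadPoolExecutor(max_workers=len(unlinked)) as pool:
    results = list(pool.map(run_abap_dataflow, unlinked))
```

The precedence links in a workflow play the role of dependencies here; absent any, the engine is free to start all three at once.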
Clint. -
Data Service 4.2 upgrade issue - R/3 abap data flow error
This error makes sense if you get it in a PROD environment, but any idea whether it can also occur when running against an ECC DEV environment?
I don't think it makes sense to use the "execute preloaded" option against DEV.
Steps performed for connecting to ECC through DS 4.2:
1. Basis Imported the new functions into ECC which we got after raising an OSS with them.
2. Gave the authorizations as per the manual.
● S_BTCH_JOB
● S_DEVELOP
● S_RFC
● S_TABU_DIS
● S_TCODE
3. Ran a simple R/3 data flow (shared directory transfer method), which resulted in the error RFC_ABAP_INSTALL_AND_RUN: RFC_ABAP_MESSAGE, changes to repository object are not permitted in the client.
Do we need more permissions than those listed above to avoid this error?

Hello,
I ran the 'R3trans -x' command, and there was no problem; the connection to the database was working.
The problem was the following:
Before starting the sdt service on the host, I had set the environment variables JAVA_HOME and LD_LIBRARY_PATH for sidadm. That is not necessary, and that was the problem. Without setting these variables it is working now.
Thanks,
Julia -
Display Data Flow - Short Dump
Hi all,
When I select display data flow for any cube, it results in a short dump.
I have searched for the answer in previous forum questions. I could find answers only for previous BW versions, but not for BI7.
Could you please let me know the solution for this issue?
Thanks & Regards,
Eswari

Hi All,
Thank you very much for all of your responses.
I am working on Support Package 10.
Here is the detailed description of the short dump.
Short text
The current application triggered a termination with a short dump.
What happened?
The current application program detected a situation which really
should not occur. Therefore, a termination with a short dump was
triggered on purpose by the key word MESSAGE (type X).
What can you do?
Note down which actions and inputs caused the error.
To process the problem further, contact you SAP system
administrator.
Using Transaction ST22 for ABAP Dump Analysis, you can look
at and manage termination messages, and you can also
keep them for a long time.
Error analysis
Short text of error message:
GP: Control Framework returned an error; contact system administrator
Long text of error message:
Diagnosis
The Graphical Framework is based on the basis technology known as
the Control Framework. A method in the Control Framework returned
an error.
Procedure
It probably involves a programming error. You should contact your
system administrator.
Procedure for System Administration
Check the programming of the graphics proxy especially for the
parameters that were sent and, if necessary, correct your program.
Technical information about the message:
Message class....... "APPLG"
Number.............. 229
Variable 1.......... " "
Variable 2.......... " "
Variable 3.......... " "
Variable 4.......... " "
How to correct the error
Probably the only way to eliminate the error is to correct the program.
If the error occures in a non-modified SAP program, you may be able to
find an interim solution in an SAP Note.
If you have access to SAP Notes, carry out a search with the following
keywords:
"MESSAGE_TYPE_X" " "
"CL_AWB_OBJECT_NET_SAPGUI======CP" or "CL_AWB_OBJECT_NET_SAPGUI======CM005"
"PBO"
If you cannot solve the problem yourself and want to send an error
notification to SAP, include the following information:
1. The description of the current problem (short dump)
To save the description, choose "System->List->Save->Local File
(Unconverted)".
2. Corresponding system log
Display the system log by calling transaction SM21.
Restrict the time interval to 10 minutes before and five minutes
after the short dump. Then choose "System->List->Save->Local File
(Unconverted)".
3. If the problem occurs in a problem of your own or a modified SAP
program: The source code of the program
In the editor, choose "Utilities->More
Utilities->Upload/Download->Download".
4. Details about the conditions under which the error occurred or which
actions and input led to the error.
Thanks,
Eswari. -
R/3 data flow is timing out in Data Services
I have created an R/3 data flow to pull some AP data in from SAP into Data Services. This data flow outputs to a query object to select columns and then outputs to a table in the repository. However the connection to SAP is not working correctly. When I try to process the data flow it just idles for an hour until the SAP timeout throws an error. Here is the error:
R/3 CallReceive error <Function Z_AW_RFC_ABAP_INSTALL_AND_RUN: connection closed without message (CM_NO_DATA_RECEIVED)
I have tested authorizations by adding SAP_ALL to the service account I'm using and the problem persists.
Also, the transports have all been loaded correctly.
My thought is that it is related to the setting that controls the method of generating and executing the ABAP code for the data flow, but I can't find any good documentation that describes this, and my trial and error method so far has not produced results.
Any help is greatly appreciated.
Thanks,
Matt

You can't find any good documentation??? I am working my butt off just.......just kiddin'
I'd suggest we divide the question into two parts:
My dataflow takes a very long time, how can I prevent the timeout after an hour? Answer:
Edit the datastore; there is a flag called "execute in background" to be enabled. With that, the ABAP is submitted as a background spool job and hence does not have the dialog-mode timeout. Another advantage is that you can watch it running by browsing the spool jobs from the SAP GUI.
The other question seems to be, why does it take that long even? Answer:
Either the ABAP takes that long because of the data volume,
or the ABAP is not performing well, e.g. a join via ABAP loops with the wrong table as the inner one.
Another typical reason is using direct_download as the transfer method. This is fine for testing, but it takes a very long time to download data via the GUI_DOWNLOAD ABAP function, and the download time is part of the ABAP execution.
So my first set of questions would be
a) How complex is the dataflow, is it just source - query - data_transfer or are there joins, lookups etc?
b) What is the volume of the table(s)?
c) What is your transfer method?
d) Have you had a look at the generated abap? (in the R/3 dataflow open the menu Validation -> Generate ABAP)
btw, some docs: https://wiki.sdn.sap.com:443/wiki/display/BOBJ/ConnectingtoSAP -
Hello People,
We are facing abnormal behavior with the dataflows in a Data Services job.
Scenario:
We are extracting the data from the CRM end in parallel. Please refer to the build:
a. We have 5 main workflows, i.e.:
=> Main WF1 has 6 more sub-WFs in it, in which each sub-WF has 1-2 DFs running in parallel.
=> Main WF2 has 21 DFs and 1 WFa (with a DF and a WFb; WFb has 1 DF) in parallel.
=> Main WF3 has 1 DF in parallel.
=> Main WF4 has 3 DFs in parallel.
=> Main WF5 has 1 WF and a DF in sequence.
b. The job regularly works perfectly fine, but sometimes it gets stuck at a DF without any error logs.
c. The job doesn't get stuck at a specific dataflow or on a specific day; many times it gets stuck at different DFs.
d. Observations in the Monitor Log:
Dataflow        State     RowCnt   LT      AT
+DF1/ZABAPDF    PROCEED   234000   8.113   394.164
/DF1/Query      PROCEED   234000   8.159   394.242
-DF1/Query_2    PROCEED   234000   8.159   394.242
Where LT: Lapse Time and AT: Absolute time
If you check the monitor log, the state of the dataflow DF1 remains PROCEED till the end; ideally it should complete.
In successful jobs, the status for DF1 is STOP. This DF takes approx. 2 min to execute.
The row count for the DF1 extraction is 234204, but it got stuck at 234000.
We then terminate the job after some time, but to our surprise it executes successfully the next day.
e. Analysis of all the failed jobs shows the same behavior at the different data flows that got stuck during execution. The logic in the data flows is perfectly fine.
Observations in the Trace log:
DATAFLOW: Process to execute data flow <DF1> is started.
DATAFLOW: Data flow <DF1> is started.
ABAP: ABAP flow <ZABAPDF> is started.
ABAP: ABAP flow <ZABAPDF> is completed.
Cache statistics determined that data flow <DF1> uses <0> caches with a total size of <0> bytes. This is less than (or equal to) the virtual memory <1609564160> bytes available for caches.
Statistics is switching the cache type to IN MEMORY.
DATAFLOW: Data flow <DF1> using IN MEMORY Cache.
DATAFLOW: <DF1> is completed successfully.
The cache-statistics messages above do not appear in the trace log of the unsuccessful job, but they do appear for the successful one.
Note: The cache type is pageable cache, DS ver is 3.2.
Please suggest.
Regards,
Santosh

Hi Santosh,
just a wild guess.
Would you be able to replicate all the DFs/WFs, delete the original DFs/WFs, rename the replicated objects back to the original DF/WF names (for convenience), and execute the job?
Sometimes a reference does not work.
Hope this works.
Regards,
Shiva Sahu -
Hi all,
I am working on the Accounts Payable Rapid Mart. Can I have one job that first creates all the .dat files in the SAP working directory, and another job that consumes the .dat files from the application shared directory, without having to run the R/3 data flow again?
If that isn't clear:
The 1st job gets the data from the SAP R/3 table and puts it in the data transport (i.e., it writes the .dat file to the working directory of the SAP server).
The 2nd job gets the .dat file from the application shared directory without having to do the first job again.
Is the above method possible, and if so, how?
I would really appreciate any comments or explanations on it.
Thanks
OJ

Imagine the following case:
You execute your regular job.
It starts a first dataflow.
A first ABAP is started, runs for a while, then finishes.
Now the system knows there is a data file on the SAP server and wants to get it.
Because we configured the datastore to use a custom transfer program as the download method, the tool expects our bat file to download the file from the SAP server to the DI server.
Our custom transfer program does nothing other than wait for 15 minutes, because we know the file will be copied automatically, without our intervention. So we wait, and after 15 minutes we return with "success".
DI then assumes the file is copied and starts reading it from the local directory.
The entire trick is to use the custom transfer batch script as a way to wait for the file to be transported automatically. In the real implementation the batch script would not simply wait but check whether the file is finally available, something along those lines.
So one job execution only, no manual intervention.
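A sketch of such a custom transfer script (Python here for illustration; the real one could be a .bat file, and the timeout and poll interval are invented values): it copies nothing itself, it only waits until the shuttle-delivered file exists and its size stops changing, then exits 0, assuming DS reads the exit code as the success signal.

```python
import os
import sys
import time

def wait_for_delivery(path: str, timeout_s: float = 900.0, poll_s: float = 5.0) -> bool:
    """Poll until `path` exists and its size is unchanged between two polls."""
    deadline = time.monotonic() + timeout_s
    last_size = -1
    while time.monotonic() < deadline:
        if os.path.exists(path):
            size = os.path.getsize(path)
            if size == last_size:      # stable across two polls -> delivered
                return True
            last_size = size
        time.sleep(poll_s)
    return False

if __name__ == "__main__" and len(sys.argv) > 1:
    # exit 0 = "transfer succeeded" (assumption: DS inspects the exit code)
    sys.exit(0 if wait_for_delivery(sys.argv[1]) else 1)
```

The size-stability check is the "check if the file is finally available" refinement mentioned above; a pure 15-minute sleep would be the degenerate version.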
Got it? Will it work? -
Hi,
I created an online sales order interactive form using ABAP. In the Web Dynpro component I maintained the display type as Native, and in the form the layout type as ZCI layout. In the form layout I dragged and dropped the Value Help button and the Submit button from the Web Dynpro Native tab. The buttons are working, but the data flow is not happening, so I am not able to create a sales order.
But if I use display type ActiveX, form layout Standard, and buttons from the Web Dynpro ActiveX tab, then F4 help is not working but the data flow happens, so I am able to create a sales order. I need F4 help, and I should also be able to create the sales order.
So please help me.
Thanks & Regards,
Krishna

Hi Mohan,
For the ZCI layout, have you inserted the Web Dynpro script in the designer? If not, go to the Layout Designer and, in the Web Dynpro toolbar, go to Utilities -> (select) Insert WebDynpro Script.
To check whether the script is inserted, go to Palette -> Hierarchy of the Adobe Form toolbar, and in the hierarchy scroll down to Variables; among the variables you'll find one script object, "containerFoundation_JS". If this is present, it will work.
If it is not inserted, use the report FP_ZCI_UPDATE.
Regards
Pradeep Goli -
COMM_STRUCTURE is unknown when migrating data flow from BW 3.x to 7.4
Dear ALL,
While migrating the 2LIS_13_VDHDR data flow from 3.x to 7.x, we get the ABAP syntax error "COMM_STRUCTURE is unknown" at the InfoSource transformation level. We are currently on 7.4 SP5. The ABAP code after migration:
TYPES:
BEGIN OF _ty_s_TG_1_full,
* InfoObject: 0CHNGID Change Run ID.
CHNGID TYPE /BI0/OICHNGID,
* InfoObject: 0RECORDTP Record type.
RECORDTP TYPE /BI0/OIRECORDTP,
* InfoObject: 0REQUID Request ID.
REQUID TYPE /BI0/OIREQUID,
* InfoObject: 0CALDAY Calendar Day.
CALDAY TYPE /BI0/OICALDAY,
* InfoObject: 0CALMONTH Calendar Year/Month.
CALMONTH TYPE /BI0/OICALMONTH,
* InfoObject: 0CALWEEK Calendar year / week.
CALWEEK TYPE /BI0/OICALWEEK,
* InfoObject: 0FISCPER Fiscal year / period.
FISCPER TYPE /BI0/OIFISCPER,
* InfoObject: 0FISCVARNT Fiscal year variant.
FISCVARNT TYPE /BI0/OIFISCVARNT,
* InfoObject: 0BILLTOPRTY Bill-to party.
BILLTOPRTY TYPE /BI0/OIBILLTOPRTY,
* InfoObject: 0COMP_CODE Company code.
COMP_CODE TYPE /BI0/OICOMP_CODE,
* InfoObject: 0DISTR_CHAN Distribution Channel.
DISTR_CHAN TYPE /BI0/OIDISTR_CHAN,
* InfoObject: 0DOC_CATEG Sales Document Category.
DOC_CATEG TYPE /BI0/OIDOC_CATEG,
* InfoObject: 0PLANT Plant.
PLANT TYPE /BI0/OIPLANT,
* InfoObject: 0SALESORG Sales Organization.
SALESORG TYPE /BI0/OISALESORG,
* InfoObject: 0SALES_GRP Sales group.
SALES_GRP TYPE /BI0/OISALES_GRP,
* InfoObject: 0SALES_OFF Sales Office.
SALES_OFF TYPE /BI0/OISALES_OFF,
* InfoObject: 0SHIP_TO Ship-To Party.
SHIP_TO TYPE /BI0/OISHIP_TO,
* InfoObject: 0SOLD_TO Sold-to party.
SOLD_TO TYPE /BI0/OISOLD_TO,
* InfoObject: 0VERSION Version.
VERSION TYPE /BI0/OIVERSION,
* InfoObject: 0VTYPE Value Type for Reporting.
VTYPE TYPE /BI0/OIVTYPE,
* InfoObject: 0DIVISION Division.
DIVISION TYPE /BI0/OIDIVISION,
* InfoObject: 0MATERIAL Material.
MATERIAL TYPE /BI0/OIMATERIAL,
* InfoObject: 0SHIP_POINT Shipping point.
SHIP_POINT TYPE /BI0/OISHIP_POINT,
* InfoObject: 0PAYER Payer.
PAYER TYPE /BI0/OIPAYER,
* InfoObject: 0DOC_CLASS Document category /Quotation/Order/Deliver
*y/Invoice.
DOC_CLASS TYPE /BI0/OIDOC_CLASS,
* InfoObject: 0DEB_CRED Credit/debit posting (C/D).
DEB_CRED TYPE /BI0/OIDEB_CRED,
* InfoObject: 0SALESEMPLY Sales Representative.
SALESEMPLY TYPE /BI0/OISALESEMPLY,
* InfoObject: 0SUBTOT_1S Subtotal 1 from pricing proced. for condit
*ion in stat. curr..
SUBTOT_1S TYPE /BI0/OISUBTOT_1S,
* InfoObject: 0SUBTOT_2S Subtotal 2 from pricing proced. for condit
*ion in stat. curr..
SUBTOT_2S TYPE /BI0/OISUBTOT_2S,
* InfoObject: 0SUBTOT_3S Subtotal 3 from pricing proced.for conditi
*on in stat. curr..
SUBTOT_3S TYPE /BI0/OISUBTOT_3S,
* InfoObject: 0SUBTOT_4S Subtotal 4 from pricing proced. for condit
*ion in stat. curr..
SUBTOT_4S TYPE /BI0/OISUBTOT_4S,
* InfoObject: 0SUBTOT_5S Subtotal 5 from pricing proced. for condit
*ion in stat. curr..
SUBTOT_5S TYPE /BI0/OISUBTOT_5S,
* InfoObject: 0SUBTOT_6S Subtotal 6 from pricing proced. for condit
*ion in stat. curr..
SUBTOT_6S TYPE /BI0/OISUBTOT_6S,
* InfoObject: 0OPORDQTYBM Open orders quantity in base unit of meas
*ure.
OPORDQTYBM TYPE /BI0/OIOPORDQTYBM,
* InfoObject: 0OPORDVALSC Net value of open orders in statistics cu
*rrency.
OPORDVALSC TYPE /BI0/OIOPORDVALSC,
* InfoObject: 0QUANT_B Quantity in base units of measure.
QUANT_B TYPE /BI0/OIQUANT_B,
* InfoObject: 0DOCUMENTS No. of docs.
DOCUMENTS TYPE /BI0/OIDOCUMENTS,
* InfoObject: 0DOC_ITEMS Number of Document Items.
DOC_ITEMS TYPE /BI0/OIDOC_ITEMS,
* InfoObject: 0NET_VAL_S Net value in statistics currency.
NET_VAL_S TYPE /BI0/OINET_VAL_S,
* InfoObject: 0COST_VAL_S Cost in statistics currency.
COST_VAL_S TYPE /BI0/OICOST_VAL_S,
* InfoObject: 0GR_WT_KG Gross weight in kilograms.
GR_WT_KG TYPE /BI0/OIGR_WT_KG,
* InfoObject: 0NT_WT_KG Net weight in kilograms.
NT_WT_KG TYPE /BI0/OINT_WT_KG,
* InfoObject: 0VOLUME_CDM Volume in cubic decimeters.
VOLUME_CDM TYPE /BI0/OIVOLUME_CDM,
* InfoObject: 0HDCNT_LAST Number of Employees.
HDCNT_LAST TYPE /BI0/OIHDCNT_LAST,
* InfoObject: 0CRM_PROD Product.
CRM_PROD TYPE /BI0/OICRM_PROD,
* InfoObject: 0CP_CATEG Category.
CP_CATEG TYPE /BI0/OICP_CATEG,
* InfoObject: 0FISCYEAR Fiscal year.
FISCYEAR TYPE /BI0/OIFISCYEAR,
* InfoObject: 0BP_GRP BP: Business Partner Group (from Hierarchy).
BP_GRP TYPE /BI0/OIBP_GRP,
* InfoObject: 0STAT_CURR Statistics Currency.
STAT_CURR TYPE /BI0/OISTAT_CURR,
* InfoObject: 0BASE_UOM Base Unit of Measure.
BASE_UOM TYPE /BI0/OIBASE_UOM,
* InfoObject: 0PROD_CATEG Product Category.
PROD_CATEG TYPE /BI0/OIPROD_CATEG,
* InfoObject: 0VOLUME Volume.
VOLUME TYPE /BI0/OIVOLUME,
* InfoObject: 0VOLUMEUNIT Volume unit.
VOLUMEUNIT TYPE /BI0/OIVOLUMEUNIT,
* InfoObject: 0FISCPER3 Posting period.
FISCPER3 TYPE /BI0/OIFISCPER3,
* InfoObject: 0SALES_DIST Sales District.
SALES_DIST TYPE /BI0/OISALES_DIST,
* InfoObject: 0BILL_TYPE Billing type.
BILL_TYPE TYPE /BI0/OIBILL_TYPE,
* InfoObject: 0MOVE_PLANT Receiving Plant/Issuing Plant.
MOVE_PLANT TYPE /BI0/OIMOVE_PLANT,
* InfoObject: 0SHIP_COND Shipping conditions.
SHIP_COND TYPE /BI0/OISHIP_COND,
* InfoObject: 0AB_RFBSK Status for Transfer to Accounting.
AB_RFBSK TYPE /BI0/OIAB_RFBSK,
* InfoObject: 0AB_FKSTO Indicator: Document Is Cancelled.
AB_FKSTO TYPE /BI0/OIAB_FKSTO,
* InfoObject: 0CUST_GRP5 Customer Group 5.
CUST_GRP5 TYPE /BI0/OICUST_GRP5,
* InfoObject: ZCU_COND1 Constomer Condition Group 1.
/BIC/ZCU_COND1 TYPE /BIC/OIZCU_COND1,
* InfoObject: ZCU_COND2 Customer Condition Group 2.
/BIC/ZCU_COND2 TYPE /BIC/OIZCU_COND2,
* InfoObject: ZBATCHCD Batch Code.
/BIC/ZBATCHCD TYPE /BIC/OIZBATCHCD,
* InfoObject: 0BATCH Batch number.
BATCH TYPE /BI0/OIBATCH,
* InfoObject: ZBATCH Batch number.
/BIC/ZBATCH TYPE /BIC/OIZBATCH,
* Field: RECORD Data record number.
RECORD TYPE RSARECORD,
END OF _ty_s_TG_1_full.
* Additional declaration for update rule interface
DATA:
MONITOR type standard table of rsmonitor WITH HEADER LINE,
MONITOR_RECNO type standard table of rsmonitors WITH HEADER LINE,
RECORD_NO LIKE SY-TABIX,
RECORD_ALL LIKE SY-TABIX,
SOURCE_SYSTEM LIKE RSUPDSIMULH-LOGSYS.
* global definitions from update rules
* TABLES: ...
DATA: IN TYPE F,
OUT TYPE F,
DENOM TYPE F,
NUMER TYPE F.
* Def. of 'credit-documents': following doc.categ. are 'credit docs'
* reversal invoice (N)
* credit memo (O)
* internal credit memo (6)
* Credit-documents are delivered with negative sign. Sign is switched
* to positive to provide positive key-figures in the cube.
* The combination of characteristics DE_CRED and DOC-CLASS provides
* a comfortable way to distinguisch e.g. positive incoming orders or
* order returns.
* Def. der 'Soll-Dokumente': folgende Belegtypen sind 'Soll-Belege'
* Storno Rechnung (N)
* Gutschrift (O)
* Interne Verrechn. Gutschr. (6)
* Soll-Dokumente werden mit negativem Vorzeichen geliefert. Um die Kenn-
* zahlen positiv in den Cube zu schreiben, wird das Vorzeich. gedreht
* Die Kombination der Merkmale DEB_CRED und DOC-CLASS gibt Ihnen die
* Möglichkeit schnell z.B. zwischen Auftrags-Eingang oder Retouren zu
* unterscheiden.
DATA: DEB_CRED(3) TYPE C VALUE 'NO6'.
FORM routine_0002
TABLES
P_MONITOR structure rsmonitor
CHANGING
RESULT TYPE _ty_s_TG_1_full-DOCUMENTS
RETURNCODE LIKE sy-subrc
ABORT LIKE sy-subrc
RAISING
cx_sy_arithmetic_error
cx_sy_conversion_error.
* init variables
* fill the internal table "MONITOR", to make monitor entries
CLEAR RESULT.
RESULT = COMM_STRUCTURE-NO_INV.
IF COMM_STRUCTURE-DOC_CATEG CA DEB_CRED.
RESULT = RESULT * ( -1 ).
ENDIF.
RETURNCODE = 0.
p_monitor[] = MONITOR[].
CLEAR:
MONITOR[].
ENDFORM. "routine_0002
FORM routine_0003
TABLES
P_MONITOR structure rsmonitor
CHANGING
RESULT TYPE _ty_s_TG_1_full-DEB_CRED
RETURNCODE LIKE sy-subrc
ABORT LIKE sy-subrc
RAISING
cx_sy_arithmetic_error
cx_sy_conversion_error.
* init variables
* fill the internal table "MONITOR", to make monitor entries
IF COMM_STRUCTURE-DOC_CATEG CA DEB_CRED.
RESULT = 'C'.
ELSE.
RESULT = 'D'.
ENDIF.
RETURNCODE = 0.
p_monitor[] = MONITOR[].
CLEAR:
MONITOR[].
ENDFORM.
Error:
E: Field "COMM_STRUCTURE-NO_INV" is unknown. It is neither in one of the
specified tables nor defined by a "DATA" statement.
The communication structure was changed to source fields, but it is not used. Please suggest how I can proceed. Thanks in advance for a quick reply.
Thanks & Regards
Ramesh G

Hi Gareth,
You have two options:
1. Transport from BW 3.1 to BI 7.0. You'll need to create a transport route between both systems. This may cause you some trouble in the future when you want to modify the objects you transported.
2. As there are few objects, you can use the XML export utility from Transport Connection. There, you create an XML file with the objects you need to transport. One thing to take care of with this option is that the business content objects you are exporting need to be activated in the destination system. Another problem is that queries are not exported.
Since it's only a cube, maybe you can create the objects manually. Note that BI 7.0 has several new functionalities; I don't know how transport or XML export would handle them.
Hope this helps.
Regards,
Diego -
Is there any in-depth documentation on extracting from R/3, i.e., how RFC is used and the general architecture of how the R/3 connection is established (the involvement of RFC)?
And a question that I had:
We use the shared directory access option on the SAP datastore:
SAP working directory - Work\Dir
Application shared directory - \\servername\foldername
In the architecture we are currently using, we have a middleware that handles all the transport.
For example, an R/3 data flow writes a .dat file to the SAP working directory; once it's done, we have an ABAP program that we run via /ose38 to move the file from the working directory to the out directory, and a shuttle then moves the file to the application shared directory, so it is not a one-step process.
One step does the extract; after the shuttle moves the file to the shared directory, another job reads the flat file from there.
Now they want to change the architecture to a one-step process.
I want to know what kind of access is needed on the working directory to move the files to the application shared directory. Any special access?
Does the working directory need write access to the shared directory?
Does DI handle the movement from the working directory to the shared directory, or is there any function that needs to be run on the SAP side to make this work?

Sounds like a custom_transfer method, in case you can find an exe to call which itself transports the file.
https://wiki.sdn.sap.com/wiki/display/BOBJ/ChosingtheTransport+Method -
Hello experts!
We want to design our data flow in a new way, and I want to know from the experts whether this is a good way or whether there is a better one.
At the moment we extract over the following InfoSources:
0CO_OM_WBS_1
0CO_OM_WBS_6
0CO_OM_OPA_1
0CO_OM_OPA_6
We extract from every DataSource as a full load into up to 8 different InfoCubes. Each time we update different attributes and selections with a lot of self-written ABAP rules and so on. All of this has grown historically.
Now we don't want to read the data from the DataSource into BW every time. So we say we need an entry layer, and we update all the data from inside BW.
We chose an InfoCube as the entry layer, because we may want to report on the new data before it is updated to the data targets. We also want to be able to delete requests, which could be difficult with an ODS object. An ODS object also has just 16 key fields, which could be a problem.
So now we want to design an InfoCube for every InfoSource and use data mart updates to the other InfoCubes (the data targets).
What we also want is an additional layer for saving some requests sometimes. At the moment I would just make a copy of the InfoCubes I want to design.
So thats it. For design of the InfoCubes I have an additional thread:
InfoCube Design
I have only around one year of BW experience and have never done this before. Maybe someone can give me some tips or hints on whether this is a good way or whether there is a better one.
Thanks in advance and best regards,
Peter

Hi
You want to update your entry layer of cubes from four InfoSources.
From this data mart, you're going to update a second layer of cubes which will be used for reporting.
Instead of building that many cubes, you can create a historical cube which stores all the data of the past, and one current cube which holds only the newly arrived data.
In the historical cube you can keep all your requests and other details, as you're passing data only from the current cube to the historical one.
From the new cube you can do current reporting only.
You can build a MultiCube over the historical cube and the current cube to have full reporting.
This will make the design simple and efficient instead of creating so many cubes.
Regards
Ganesh N