Separating data sets
Hi All
I have a table with a column END_TIME. The problem is that the data is a mix of hourly and 15-minute interval records, like below:
BI_ID TR_ID END_TIME
122574 143399 1315
122575 143399 1330
122576 143399 1345
122577 143399 1400
122578 143399 1415
122579 143399 1430
122580 143399 1445
122581 143399 1500
122582 143399 1515
122583 143399 1530
122584 143399 1545
122585 143399 1600
122586 143399 1615
122587 143399 1630
122588 143399 1645
1013671 79331 1300
1013672 79331 1400
1013673 79331 1500
1013674 79331 1600
1013675 79331 1700
1013676 79331 1800
1013677 79331 1900
1013678 79331 2000
1013679 79331 2100
1013680 79331 2200
1013681 79331 2300
1013682 79331 2400
1013683 79331 100
1013684 79331 200
1013685 79331 300
1013686 79331 400
1013687 79331 500
1013688 79331 600
1013689 79331 700
Can anyone please advise me on how to separate the 15-minute interval records from the whole set of records? I have to load all the records that are in 15-minute intervals into a separate table.
Thanks
Diff is null because you did not define a starting point, or set nvl(lag, ...) for the first record.
The first record does not have a previous record.
nvl(
  LAG ( END_INTV_TIME , 1) OVER (PARTITION BY TRAFFIC_SAMPLE_ID ORDER BY BIN_DATA_ID ),
  END_INTV_TIME
) - END_INTV_TIME as diff
or
( LAG ( END_INTV_TIME , 1, 0) OVER (PARTITION BY TRAFFIC_SAMPLE_ID ORDER BY BIN_DATA_ID )
- END_INTV_TIME
) as diff
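To make the approach concrete, here is a minimal runnable sketch in Python with SQLite; the table name and the sample rows are invented from the post, and on Oracle the same LAG query plus an INSERT INTO ... SELECT would do the actual load. One wrinkle the raw subtraction misses: END_TIME is an HHMM number, so the step from 1345 to 1400 computes as 55, not 15, and 2400 wraps to 100 after midnight; converting to minutes and taking the gap modulo 1440 handles both.

```python
import sqlite3

# Hypothetical miniature of the poster's table (names taken from the sample data).
rows = [
    (122574, 143399, 1315), (122575, 143399, 1330),
    (122576, 143399, 1345), (122577, 143399, 1400),
    (1013671, 79331, 1300), (1013672, 79331, 1400), (1013673, 79331, 1500),
]
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bin_data (bi_id INT, tr_id INT, end_time INT)")
con.executemany("INSERT INTO bin_data VALUES (?, ?, ?)", rows)

# Convert HHMM to minutes, take the gap to the previous row per TR_ID
# (wrapping past midnight), then keep only series whose gap is always 15.
fifteen_minute_ids = con.execute("""
    SELECT tr_id FROM (
        SELECT tr_id,
               (end_time / 100 * 60 + end_time % 100
                - LAG(end_time / 100 * 60 + end_time % 100)
                      OVER (PARTITION BY tr_id ORDER BY bi_id)
                + 1440) % 1440 AS diff
        FROM bin_data)
    GROUP BY tr_id
    HAVING MIN(COALESCE(diff, 15)) = 15 AND MAX(COALESCE(diff, 15)) = 15
""").fetchall()
print(fifteen_minute_ids)  # -> [(143399,)]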
Similar Messages
-
Spry xml data set, accessing specific rows
Hello. I've been trying to build a website using Spry XML
Data Sets, and while I've accomplished my goals for now, I don't
think the solution I came up with is the best.
The website consists of several areas that show projects.
Each project has several fields that are to be filled with content
retrieved from the xml files, but the projects are not all exactly
alike and some have specific fields that others don't require.
All the info is available in several languages, so for now
I've created an xml file for each one. An xml file could be like
Code Part1. (Why I can't add several code snippets along the post
baffles me. I mean, I can't, right?)
This dataset, for simplicity purposes, is not dependent on
the language (Code Part2).
And then there are the content areas (Code Part3).
So as you see, each project has its own structure. This makes
using spry:repeat a not very effective method for filling in all
the content. Ideally I should be able to access each row in the
dataset through some sort of value, like id, or one of its
children's values. The ds_RowID depends on the row order, so unless
there's another way to use it, it doesn't solve my problem.
Here's what I've come up with (Code Part4).
This works (in FF3 OSX, at least), although there are some
other problems that might make it necessary to create a spry:region
(or at least use spry:repeat) for each field. Anyway, it seems silly and wasteful to go through every row of the dataset every time for each of the fields that need to be filled.
My hope is that I'm ignorant of some much better method of
achieving my goals, something more direct and elegant.
Can anyone help me out with this? Thank you very much in
advance.
Hi there,
You are indeed absolutely correct, a Spry region should have been shown; my apologies for that. The code is wrapped in a standard Spry region.
That being said, I have used a workaround in the SQL SELECT statement of the xmlExportObj recordset to find the information required without having to do any IF/ELSE on the page.
Many thanks for your reply and for pointing out my mistake in how I had presented my question.
My next question will follow separately.
Regards
Ray -
Parts of the Data went missing when linking two data sets
Hello Everyone!
I have two data sets for my report:
1. an Essbase cube which is transformed to a relational star schema by BI Administration; I'm calling the cube with SQL (not MDX)
2. a single relational table
In the report I'm iterating on the 'product' dimension of the cube, and I need to join the cube with the single relational table, because that table holds all the detailed information on each product (mostly text, which is why it is not in the cube).
What I did was join the two data sets with a Master/Detail link. So far so good. But the thing is, more than half of my data goes missing in the XML output when I join those two. If I delete the join, all the data is back again in the XML output, but I can't refer to the details of the relational table.
I tried different things to fix the problem. I sorted the first dataset and joined it again with the second dataset. As a result there was still a lot of data missing, but not the same data as before.
I find this behaviour really strange and don't have a clue what to do differently.
I would really appreciate some help, because I already spent hours trying to fix that problem.
Thank you!
Edited by: 981323 on 27.02.2013 06:49
Thanks for that. Typically, I found this 5 minutes after I posted the request on the forum.
The problem is that every time I try to do this, the system does not recognise my <?for-each?> containing the parameter and says the parameter is not defined. When I validate the template, it seems to ignore the for-each completely and complains about too many ends.
I just can't see why it does not work. I am using four records, 2 of each set, which link over the item number. I can show them separately, but as soon as I add the variable loop ( <?for-each:/INV_GL/GL[GLLITM= $ITM]?> where ITM is my variable and GLLITM is that field in the data) it stops working. -
Data sets and text encoding problem
I have a problem when trying to import French text variables into my data sets (for automated generation of lower thirds). I cannot get PS to display French special characters correctly. All 'accented' As and Es etc. display as weird text strings, just like a browser running with the wrong text encoding.
How do I get PS to interpret the data sets right? Any idea?
thanx
Gref
(PS CS6 (13.0.1), Mac Pro running OS X 10.7.3)
Thanx Bill.
Unfortunately I cannot change the font as it is corporate. It has all the characters I need, though.
Did I mention, that I have to generate french AND german subs? No.
Well, I tackled the German versions by processing the text files I get with TextWrangler, saving them as UTF-16. That worked for the German versions.
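That re-encoding step can be sketched in a few lines; the file names, the tab-separated layout, and the UTF-8 source encoding are assumptions for illustration (Photoshop's variable data sets are generally happiest with UTF-16 text that carries a BOM):

```python
from pathlib import Path

def reencode_to_utf16(src: Path, dst: Path, src_encoding: str = "utf-8") -> None:
    # Read with the assumed source encoding and write UTF-16 with a BOM,
    # which text importers typically use to detect the encoding.
    dst.write_text(src.read_text(encoding=src_encoding), encoding="utf-16")

# Tiny demo: a variable file with accented French characters.
src = Path("lower_thirds_fr.txt")
src.write_text("Nom\tFonction\nRenée\tChargée d'études\n", encoding="utf-8")
dst = Path("lower_thirds_fr_utf16.txt")
reencode_to_utf16(src, dst)
```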
Unfortunately I ran into 2 other problems now:
First problem:
The data set consists of 7 names and their respective functions. This processes perfectly in German (or so I thought at this point), but in the French version it processes perfectly for the first 4 data sets, while the fifth has the right name but the function variable now displays the 5th function AND all the rest of the names and functions. I cannot get these data sets displayed separately. Bummer.
But even more annoying…
Second problem:
When I now import my perfect German lower thirds into Avid, I seem to lose my alpha information.
Avid is super easy to use with alpha: you have 3 choices: inverted (the usual way of importing), with black in the alpha transparent; normal, with white transparent; or ignore, with no transparency.
Importing with 'inverted' alpha has always worked (I have used Avid for about 15 years now), but these processed PSDs don't. No alpha.
So I tried to assign an alpha channel via Actions to these files. Now Avid seems to discard the layers completely leaving me with white text where it should be black. The PSDs have black character text layers but in Avid the characters are white. It seems like AVID just renders the white pixels of the alpha channel.
Assigning the Alpha is no option anyway, as the whole process should make the generation of these lower third EASIER and faster for the person that has to make every lower third by hand by now.
All of this can be boiled down to one word: ARGH! -
Hello all, I've got a question that I've been wrestling with for the better part of 2 days now. I'm running several filters on a very large data set. Each row in the data set is a product that has 8 or so properties. There are about 900 rows, and each row has a SKU; out of the 900 rows there are only about 50 unique SKUs. Right now every row is being displayed, but what I want to do is display each SKU only once. I tried selecting 'remove repeating nodes' in the UI, but this doesn't work because the entire nodes aren't repeating, just the SKUs.
Is this something that is easily doable?
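The 'each SKU once' filter amounts to keeping the first row seen per SKU. A minimal sketch of that idea, with invented row fields (in Spry this would live in a JavaScript filter function applied to the data set):

```python
def first_per_key(rows, key):
    """Keep only the first row for each distinct key value, preserving order."""
    seen = set()
    out = []
    for row in rows:
        k = row[key]
        if k not in seen:
            seen.add(k)
            out.append(row)
    return out

products = [
    {"sku": "A100", "color": "red"},
    {"sku": "A100", "color": "blue"},   # repeated SKU, dropped
    {"sku": "B200", "color": "green"},
]
print(first_per_key(products, "sku"))
# -> [{'sku': 'A100', 'color': 'red'}, {'sku': 'B200', 'color': 'green'}]
```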
You can view the working application here (note: runs best in FF3, I need to trim down the DS to get it to work well in IE)
www.bradygodwin.com/test/rnaToolNewData.html
and the XML here
http://www.bradygodwin.com/test/rnaindev.xml
Thanks in advance for the help!
Hi there,
You are indeed absolutely correct, a Spry region should have been shown; my apologies for that. The code is wrapped in a standard Spry region.
That being said, I have used a workaround in the SQL SELECT statement of the xmlExportObj recordset to find the information required without having to do any IF/ELSE on the page.
Many thanks for your reply and for pointing out my mistake in how I had presented my question.
My next question will follow separately.
Regards
Ray -
Open data set and close data set
Hi all,
I have some doubts about OPEN/READ/CLOSE DATASET:
how do I transfer data from an internal table to a sequential file, and how do I find the sequential file?
Thanks and regards,
Chaitanya
Hi Chaitanya,
Refer Sample Code:
constants: c_split TYPE c
VALUE cl_abap_char_utilities=>horizontal_tab,
c_path TYPE char100
VALUE '/local/data/interface/A28/DM/OUT'.
* Selection Screen
SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
PARAMETERS : rb_pc RADIOBUTTON GROUP r1 DEFAULT 'X'
USER-COMMAND ucomm, "For Presentation
p_f1 LIKE rlgrap-filename
MODIF ID rb1, "Input File
rb_srv RADIOBUTTON GROUP r1, "For Application
p_f2 LIKE rlgrap-filename
MODIF ID rb2, "Input File
p_direct TYPE char128 MODIF ID abc DEFAULT c_path.
"File directory
SELECTION-SCREEN END OF BLOCK b1.
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_f1.
*-- Browse Presentation Server
PERFORM f1000_browse_presentation_file.
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_f2.
*-- Browse Application Server
PERFORM f1001_browse_appl_file.
AT SELECTION-SCREEN OUTPUT.
LOOP AT SCREEN.
IF rb_pc = 'X' AND screen-group1 = 'RB2'.
screen-input = '0'.
MODIFY SCREEN.
ELSEIF rb_srv = 'X' AND screen-group1 = 'RB1'.
screen-input = '0'.
MODIFY SCREEN.
ENDIF.
IF screen-group1 = 'ABC'.
screen-input = '0'.
MODIFY SCREEN.
ENDIF.
ENDLOOP.
*& Form f1000_browse_presentation_file
*  Pick up the file path for the file in the presentation server
FORM f1000_browse_presentation_file .
CONSTANTS: lcl_path TYPE char20 VALUE 'C:'.
CALL FUNCTION 'WS_FILENAME_GET'
EXPORTING
def_path = lcl_path
mask = c_mask "',.,..'
mode = c_mode
title = text-006
IMPORTING
filename = p_f1
EXCEPTIONS
inv_winsys = 1
no_batch = 2
selection_cancel = 3
selection_error = 4
OTHERS = 5.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
flg_pre = c_x.
ENDIF.
ENDFORM. " f1000_browse_presentation_file
*& Form f1001_browse_appl_file
*  Pick up the file path for the file in the application server
FORM f1001_browse_appl_file .
DATA: lcl_directory TYPE char128.
lcl_directory = p_direct.
CALL FUNCTION '/SAPDMC/LSM_F4_SERVER_FILE'
EXPORTING
directory = lcl_directory
filemask = c_mask
IMPORTING
serverfile = p_f2
EXCEPTIONS
canceled_by_user = 1
OTHERS = 2.
IF sy-subrc <> 0.
MESSAGE e000(zmm) WITH text-039.
flg_app = 'X'.
ENDIF.
ENDFORM. " f1001_browse_appl_file
*& Form f1003_pre_file
*  Upload the file from the presentation server
FORM f1003_pre_file .
DATA: lcl_filename TYPE string.
lcl_filename = p_f1.
IF p_f1 IS NOT INITIAL.
CALL FUNCTION 'GUI_UPLOAD'
EXPORTING
filename = lcl_filename
filetype = 'ASC'
has_field_separator = 'X'
TABLES
data_tab = i_input
EXCEPTIONS
file_open_error = 1
file_read_error = 2
no_batch = 3
gui_refuse_filetransfer = 4
invalid_type = 5
no_authority = 6
unknown_error = 7
bad_data_format = 8
header_not_allowed = 9
separator_not_allowed = 10
header_too_long = 11
unknown_dp_error = 12
access_denied = 13
dp_out_of_memory = 14
disk_full = 15
dp_timeout = 16
OTHERS = 17.
IF sy-subrc <> 0.
MESSAGE s000 WITH text-031.
EXIT.
ENDIF.
ELSE.
PERFORM populate_error_log USING space
text-023.
ENDIF.
ENDFORM. " f1003_pre_file
*& Form f1004_app_file
*  Upload the file from the application server
FORM f1004_app_file .
REFRESH: i_input.
OPEN DATASET p_f2 IN TEXT MODE ENCODING DEFAULT FOR INPUT.
IF sy-subrc EQ 0.
DO.
READ DATASET p_f2 INTO wa_input_rec.
IF sy-subrc EQ 0.
*-- Split The CSV record into Work Area
PERFORM f0025_record_split.
*-- Populate internal table.
APPEND wa_input TO i_input.
CLEAR wa_input.
IF sy-subrc <> 0.
MESSAGE s000 WITH text-030.
EXIT.
ENDIF.
ELSE.
EXIT.
ENDIF.
ENDDO.
ENDIF.
ENDFORM. " f1004_app_file
*  Move the assembly layer file into the work area
FORM f0025_record_split .
CLEAR wa_input.
SPLIT wa_input_rec AT c_split INTO
wa_input-legacykey
wa_input-bu_partner
wa_input-anlage.
ENDFORM. " f0025_record_split
Reward points if this helps.
Manish -
Hi,
"Report Builder is a report authoring environment for business users who prefer to work in the Microsoft Office environment.
You work with one report at a time. You can modify a published report directly from a report server. You can quickly build a report by adding items from the Report Part Gallery provided by report designers from your organization." - as mentioned on TechNet.
I wonder how a non-technical business analyst can use Report Builder 3 to create ad-hoc reports/analyses with a list of parameters based on other data sets.
Do they need to learn T-SQL, or how to add and link a parameter in Report Builder? And how can they add a parameter to a report? I'm not sure what I am missing from the whole idea behind Report Builder.
I have SQL Server 2012 STD and Report Builder 3.0 and want to train non-technical users to create reports as per their needs without asking the IT department.
Everything seems simple and working except parameters with a list of values, e.g. sales year list, sales month list, gender, etc.
So how can they configure parameters based on other data sets?
The workaround in my mind is to create a report with most of the columns, add the most frequent parameters based on other data sets, and then have non-technical users modify that report according to their needs. But that way it's still restricting users to a set of defined reports.
I want functionality like Excel Power View parameters in Report Builder, which is driven from the source data but is only available from Excel 2013 onward, which most people don't have yet.
So how should Report Builder be used? Any other thoughts or workarounds, or guidance on the purpose of Report Builder, would be appreciated.
Many thanks and kind regards,
For a quick review of new features, try the virtual labs: http://msdn.microsoft.com/en-us/aa570323
Hi Asam,
If we want to create a parameter that depends on another dataset, we can additionally create or add a dataset, embedded or shared, that has a query containing query variables, and then use the "Get values from a query" option to get the available values. For more details, please see:
http://msdn.microsoft.com/en-us/library/dd283107.aspx
http://msdn.microsoft.com/en-us/library/dd220464.aspx
As to the Report Builder features, we can refer to the following articles:
http://technet.microsoft.com/en-us/library/hh213578.aspx
http://technet.microsoft.com/en-us/library/hh965699.aspx
Hope this helps.
Thanks,
Katherine Xiong
TechNet Community Support -
Hi,
In the release notes of 9.1 it is mentioned that:
Display of all OIM User attributes on the Step 3: Modify Connector Configuration page
On the Step 3: Modify Connector Configuration page, the OIM - User data set now shows all the OIM User attributes. In the earlier release, the display of fields was restricted to the ones that were most commonly used.
and
Attributes of the ID field are editable
On the Step 3: Modify Connector Configuration page, you can modify some of the attributes of the ID field. The ID field stores the value that uniquely identifies a user in Oracle Identity Manager and in the target system.
Can anyone please guide me on how to get both of these? I am getting only a few fields of the user profile in the OIM - User data set, and I am also not able to modify the ID field.
I am using OIM 9.1 on WebSphere Application Server 6.1.
Thanks
Unfortunately I do not have experience using the SPML generic connector. Have you read through all the documentation pertaining to the GTC?
-Kevin -
Multiple data sets: a common global dataset and per/report data sets
Is there a way to have a common dataset included in an actual report data set?
Case:
For one project I have about 70 different letters, each letter being a report in Bi Publisher, each one of them having its own dataset(s).
However, all of these letters share a common standardized reference block (e.g. the user, his email address, his phone number, etc.); this common reference block comes from a common dataset.
The layout of the reference block is done by including a sub-layout (RTF file).
The SQL query for getting the dataset of the reference block is always the same and, for now, is included in each of the 70 reports.
This makes maintenance of the reference block very hard, because each of the 70 reports must be adapted when changes to the reference block/dataset are made.
Is there a better way to handle this? Can I include a shared dataset, that I would define and maintain only once, in each single report definition?
Hi,
Using a subtemplate for the centrally managed layout is fine.
However, I would like to be able to do the same thing for the datasets in the reports:
one centrally managed data set (definition) for the common dataset, which is dynamic and, in our case, a rather complex query,
and
datasets defined on a per report basis
It would be nice if we could do a kind of 'include dataset from another report' when defining the datasets for a report.
Of course, this included dataset is executed within each individual report.
This possibility would make the maintenance of this one central query easier than when we have to maintain this query in each of the 70 reports over and over again. -
SQL Update a Single Row Multiple Times Using 2 Data Sets
I'm working in T-SQL and have an issue where I need to do multiple updates to a single row based on multiple conditions.
By Rank_:
If the column is NULL, I need it to update no matter what the Rank_ is.
If the Ranks are the same, I need it to update in order of T2_ID.
And I need it to use the last updated output.
I've tried using the update statement below, but it only does the first update and the rest are ignored. Here is an example of the data sets I'm working with and the desired results. Thanks in advance!
update a
set Middle = case when a.Rank_> b.Rank_ OR a.Middle IS NULL then ISNULL(b.Middle,a.Middle) end,
LName = case when a.Rank_> b.Rank_ OR a.Lname IS NULL then ISNULL(b.LName,a.LName) end,
Rank_ = case when a.Rank_> b.Rank_ then b.Rank_ end
from #temp1 a
inner join #temp2 b on a.fname = b.fname
where b.T2_ID in (select top 100 percent T2_ID from #temp2 order by T2_ID asc)
The MERGE clause actually errors because it attempts to update the same record. I think this CTE statement is the closest I've come, but I'm still working through it, as I'm not too familiar with them. It returns multiple rows, which I will have to insert into a temp table to update, since the resulting row I need is the last in the table.
;WITH cteRowNumber
AS(
Select DISTINCT
Row_Number() OVER(PARTITION BY a.LName ORDER BY a.LName ASC, a.Rank_ DESC,b.T2ID ASC) AS RowNumber
,a.FName
,a.LName
,b.LName as xLname
,a.MName
,b.MName AS xMName
,a.Rank_
,b.Rank_ AS xRank
,b.T2ID
FROM #temp1 a
inner join #temp2 b
ON a.fname = b.fname
), cteCursor
AS(
Select a.RowNumber,
a.Fname
,a.LName
,a.xLname
,a.MName
,a.xMName
,a.xRank
,a.T2ID
,CASE WHEN a.Rank_ >= a.xRank THEN ISNULL(a.xRank,a.Rank_) else ISNULL(a.Rank_,a.xRank) end AS Alt_Rank_
,CASE WHEN a.Rank_ >= a.xRank THEN ISNULL(a.xMName,a.MName) else ISNULL(a.MName,a.xMName) end AS Alt_MName
,CASE WHEN a.Rank_ >= a.xRank THEN ISNULL(a.xLName,a.lname) else ISNULL(a.LName,a.xlname) end as Alt_Lname
FROM cteRowNumber a
where a.RowNumber = 1
UNION ALL
Select crt.RowNumber
,crt.FName
,crt.LName
,crt.xLname
,crt.MName
,crt.xMName
,crt.xRank
,crt.T2ID
,CASE WHEN Prev.Alt_Rank_ >= crt.xRank THEN ISNULL(crt.xRank,Prev.Alt_Rank_) else ISNULL(Prev.Alt_Rank_,crt.xRank) end AS Alt_Rank
,CASE WHEN Prev.Alt_Rank_ >= crt.xRank THEN ISNULL(crt.xMName,Prev.Alt_MName) else ISNULL(Prev.Alt_MName,crt.xMName) end AS Alt_MName
,CASE WHEN Prev.Alt_Rank_ >= crt.xRank THEN ISNULL(crt.xLName,Prev.Alt_Lname) else ISNULL(Prev.Alt_Lname,crt.xLName) end as Alt_Lname
FROM cteCursor prev
inner join cteRowNumber crt
on prev.fname = crt.fname and prev.RowNumber + 1 = crt.RowNumber
SELECT cte.*
FROM cteCursor cte -
OBIEE 11g BI Publisher; New Data Set Creation Error "Failed to load SQL"
Hi,
I'm trying to create a new SQL data set (from a client machine). I use the query builder to build the data set, but when I click "OK", it fires the error "Failed to load SQL".
Strangely, if I connect to the OBIEE server's desktop and create the data set there, it works without any issues. I wonder whether this could be a firewall issue; if so, what ports should I open?
It's an enterprise installation, and we have already opened 9703, 9704, and 9706.
Has anyone come across such a situation?
Talles,
First of all, you might have more chance of getting a response over in the BIP forum. Other than that, all I can think of is: is your MS SQL Server running with mixed mode auth? -
Exception Handling for OPEN DATA SET and CLOSE DATA SET
Hi ppl,
Can you please let me know what exceptions can be handled for OPEN, READ, TRANSFER and CLOSE DATASET?
Many thanks.
Hi,
try this way...
DO.
  TRY.
      READ DATASET filename INTO datatab.
    CATCH cx_sy_conversion_codepage cx_sy_codepage_converter_init
          cx_sy_file_authority cx_sy_file_io cx_sy_file_open.
      EXIT.
  ENDTRY.
  IF sy-subrc NE 0.
    EXIT.
  ELSE.
    APPEND datatab.
  ENDIF.
ENDDO. -
Download using open data set and close data set
Can anybody please send some sample program using OPEN DATASET and CLOSE DATASET? The data should get downloaded to the application server.
A very simple program is needed.
Hi Arun,
See the sample code for a BDC using OPEN DATASET.
report ZSDBDCP_PRICING no standard page heading
line-size 255.
include zbdcrecx1.
*--Internal Table To hold condition records data from flat file.
Data: begin of it_pricing occurs 0,
key(4),
f1(4),
f2(4),
f3(2),
f4(18),
f5(16),
end of it_pricing.
*--Internal Table To hold condition records header .
data : begin of it_header occurs 0,
key(4),
f1(4),
f2(4),
f3(2),
end of it_header.
*--Internal Table To hold condition records details .
data : begin of it_details occurs 0,
key(4),
f4(18),
f5(16),
end of it_details.
data : v_sno(2),
v_rows type i,
v_str(255),                      "record buffer for READ DATASET
v_fname(40).
start-of-selection.
refresh : it_pricing,it_header,it_details.
clear : it_pricing,it_header,it_details.
CALL FUNCTION 'UPLOAD'
EXPORTING
FILENAME = 'C:\WINDOWS\Desktop\pricing.txt'
FILETYPE = 'DAT'
TABLES
DATA_TAB = it_pricing
EXCEPTIONS
CONVERSION_ERROR = 1
INVALID_TABLE_WIDTH = 2
INVALID_TYPE = 3
NO_BATCH = 4
UNKNOWN_ERROR = 5
GUI_REFUSE_FILETRANSFER = 6
OTHERS = 7.
WRITE : / 'Condition Records ', P_FNAME, ' on ', SY-DATUM.
OPEN DATASET P_FNAME FOR INPUT IN TEXT MODE.
if sy-subrc ne 0.
write : / 'File could not be uploaded.. Check file name.'.
stop.
endif.
CLEAR : it_pricing[], it_pricing.
DO.
READ DATASET P_FNAME INTO V_STR.
IF SY-SUBRC NE 0.
EXIT.
ENDIF.
write v_str.
translate v_str using '#/'.
SPLIT V_STR AT ',' INTO it_pricing-key
it_pricing-F1 it_pricing-F2 it_pricing-F3
it_pricing-F4 it_pricing-F5 .
APPEND it_pricing.
CLEAR it_pricing.
ENDDO.
IF it_pricing[] IS INITIAL.
WRITE : / 'No data found to upload'.
STOP.
ENDIF.
loop at it_pricing.
At new key.
read table it_pricing index sy-tabix.
move-corresponding it_pricing to it_header.
append it_header.
clear it_header.
endat.
move-corresponding it_pricing to it_details.
append it_details.
clear it_details.
endloop.
perform open_group.
v_rows = sy-srows - 8.
loop at it_header.
perform bdc_dynpro using 'SAPMV13A' '0100'.
perform bdc_field using 'BDC_CURSOR'
'RV13A-KSCHL'.
perform bdc_field using 'BDC_OKCODE'
'/00'.
perform bdc_field using 'RV13A-KSCHL'
it_header-f1.
perform bdc_dynpro using 'SAPMV13A' '1004'.
perform bdc_field using 'BDC_CURSOR'
'KONP-KBETR(01)'.
perform bdc_field using 'BDC_OKCODE'
'/00'.
perform bdc_field using 'KOMG-VKORG'
it_header-f2.
perform bdc_field using 'KOMG-VTWEG'
it_header-f3.
**Table Control
v_sno = 0.
loop at it_details where key eq it_header-key.
v_sno = v_sno + 1.
clear v_fname.
CONCATENATE 'KOMG-MATNR(' V_SNO ')' INTO V_FNAME.
perform bdc_field using v_fname
it_details-f4.
clear v_fname.
CONCATENATE 'KONP-KBETR(' V_SNO ')' INTO V_FNAME.
perform bdc_field using v_fname
it_details-f5.
if v_sno eq v_rows.
v_sno = 0.
perform bdc_dynpro using 'SAPMV13A' '1004'.
perform bdc_field using 'BDC_OKCODE'
'=P+'.
perform bdc_dynpro using 'SAPMV13A' '1004'.
perform bdc_field using 'BDC_OKCODE'
'/00'.
endif.
endloop.
*--Save
perform bdc_dynpro using 'SAPMV13A' '1004'.
perform bdc_field using 'BDC_OKCODE'
'=SICH'.
perform bdc_transaction using 'VK11'.
endloop.
perform close_group.
Hope this resolves your query.
Reward all the helpful answers.
Regards -
What is open data set and close data set
What are OPEN DATASET and CLOSE DATASET, and how do we use files in the SAP directories?
Hi,
OPEN DATASET is used to read from or write to the application server. Other than that, I am not sure there is any other way to do the same. Here is a short description:
FILE HANDLING IN SAP
Introduction
Files on application server are sequential files.
Files on presentation server / workstation are local files.
A sequential file is also called a dataset.
Handling of Sequential file
Three steps are involved in sequential file handling
OPEN
PROCESS
CLOSE
Here processing of file can be READING a file or WRITING on to a file.
OPEN FILE
Before data can be processed, a file needs to be opened.
After processing file is closed.
Syntax:
OPEN DATASET <file name> FOR {OUTPUT/INPUT/APPENDING}
IN {TEXT/BINARY} MODE
This statement returns SY_SUBRC as 0 for successful opening of file or 8, if unsuccessful.
OUTPUT: Opens the file for writing. If the dataset already exists, this will place the cursor at the start of the dataset, the old contents get deleted at the end of the program or when the CLOSE DATASET is encountered.
INPUT: Opens a file for READ and places the cursor at the beginning of the file.
FOR APPENDING: Opens the file for writing and places the cursor at the end of file. If the file does not exist, it is generated.
BINARY MODE: The READ or TRANSFER will be character wise. Each time n characters are READ or transferred. The next READ or TRANSFER will start from the next character position and not on the next line.
IN TEXT MODE: The READ or TRANSFER will start at the beginning of a new line each time. If for READ, the destination is shorter than the source, it gets truncated. If destination is longer, then it is padded with spaces.
Defaults: If nothing is mentioned, then defaults are FOR INPUT and in BINARY MODE.
PROCESS FILE:
Processing a file involves READing the file or Writing on to file TRANSFER.
TRANSFER Statement
Syntax:
TRANSFER <field> TO <file name>.
<Field> can also be a field string / work area / DDIC structure.
Each transfer statement writes a statement to the dataset. In binary mode, it writes the length of the field to the dataset. In text mode, it writes one line to the dataset.
If the file is not already open, TRANSFER tries to OPEN file FOR OUTPUT (IN BINARY MODE) or using the last OPEN DATASET statement for this file.
In file handling, TRANSFER is the only statement which does not return SY-SUBRC.
READ Statement
Syntax:
READ DATASET <file name> INTO <field>.
<Field> can also be a field string / work area / DDIC structure.
Each READ will get one record from the dataset. In binary mode it reads the length of the field and in text mode it reads each line.
CLOSE FILE:
The program will close all sequential files, which are open at the end of the program. However, it is a good programming practice to explicitly close all the datasets that were opened.
Syntax:
CLOSE DATASET <file name>.
SY-SUBRC will be set to 0 or 8 depending on whether the CLOSE is successful or not.
DELETE FILE:
A dataset can be deleted.
Syntax:
DELETE DATASET <file name>.
SY-SUBRC will be set to 0 or 8 depending on whether the DELETE is successful or not.
Pseudo logic for processing the sequential files:
For reading:
Open dataset for input in a particular mode.
Start DO loop.
Read dataset into a field.
If READ is not successful.
Exit the loop.
Endif.
Do relevant processing for that record.
End the do loop.
Close the dataset.
For writing:
Open dataset for output / Appending in a particular mode.
Populate the field that is to be transferred.
TRANSFER the filed to a dataset.
Close the dataset.
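The reading pseudologic above maps directly onto any language's sequential-file API. As a neutral illustration (the file name is invented, and this is not ABAP):

```python
from pathlib import Path

# Stand-in for a dataset on the application server.
path = Path("zsample_dataset.txt")
path.write_text("rec1\nrec2\nrec3\n")

records = []
with path.open("r") as dataset:            # OPEN DATASET ... FOR INPUT
    while True:
        line = dataset.readline()          # READ DATASET ... INTO field
        if not line:                       # "If READ is not successful, exit the loop"
            break
        records.append(line.rstrip("\n"))  # relevant processing for the record
# Leaving the with-block closes the file   # CLOSE DATASET
print(records)  # -> ['rec1', 'rec2', 'rec3']
```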
Regards
Anver
If this helped, please mark points. -
How to use open data set in SAP
Hi SAP Gurus,
Could anyone explain how to use OPEN DATASET in SAP?
I need to upload a file from Application server (ZSAPUSAGEDATA) to internal table (IT_FINAL).
Thanks & Regards,
Krishna
Hi Krishna,
These are the steps you need to follow.
tables: specify the table.
data: begin of fs_...
end of fs_ " Structure Field string.
data: t_table like
standard table
of fs_...
data:
w_file TYPE string.
data:
fname(10) VALUE '.\xyz.TXT'.
select-options: if any.
PARAMETERS:
p_file LIKE rlgrap-filename.
w_file = p_file.
select .... statement
OPEN DATASET fname FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
*OPEN DATASET fname FOR OUTPUT IN BINARY MODE.
LOOP AT t_... INTO fs_....
write:/ .....
TRANSFER fs_... TO fname.
or
TRANSFER t_... TO fname
ENDLOOP.
CLOSE DATASET fname.
Reward points if you were helped, or ask for a more detailed explanation if the problem is not solved.
Regards Harsh.