Understand OlapQueryLog Dataset
Hi everybody,
I would like to create manual aggregations to improve some MDX queries (in SSRS reports).
When I read the OlapQueryLog table, I see this dataset:
10000011001000000,00000000000000000000....
1000002300400....
The first dimension is a time dimension with these selected attributes:
Days
Months
Trimester
Year
=> All attributes are arranged as a hierarchy in the "Attribute Relationships" of the time dimension.
I created an aggregation on the "Day" attribute; it improved performance, but not enough to speed up my query.
I created three more aggregations: one on "Day", one on "Month", one on "Year" => nothing changed.
I created another one with "Days", "Months", "Trimester" and "Year" (ignoring the BIDS warning about the unvalidated aggregation) => nothing changed.
My question:
How can I improve performance for this kind of dataset?
Thanks all for your responses.
Arnaud
Hi Arnaud,
According to your description, you want to improve the performance of the MDX query behind your SQL Server Reporting Services report dataset, right? In your scenario, since we don't know the MDX query or the detailed cube structure, it's hard to give you a specific measure to improve the MDX performance. Here are some useful tips for improving performance:
Remove empty tuples from your result set to reduce the time spent by the query execution engine serializing the result set.
Use EXISTS rather than filtering on member properties to avoid a slow execution path.
Filter a set before using it in a crossjoin to reduce the cube space before performing the crossjoin.
Use Non_Empty_Behavior where possible to enable the query execution engine to use bulk evaluation mode.
Reference:
Top 3 Simplest Ways To Improve Your MDX Query
Ways of Improving MDX Performance and Improvements with MDX in Katmai (SQL 2008)
Regards,
Charlie Liao
TechNet Community Support
Similar Messages
-
Like to understand - Stream DataSet (of XML data) to ZIP file?
I have a similar requirement where I need to save dataset data in a multi-part zip file. From this site I got the sample code
".NET - Stream DataSet (of XML data) to ZIP file?", but a few areas are not clear.
The sample code is as follows:
// get a connection to the database
var c1 = new System.Data.SqlClient.SqlConnection(connstring1);
var da = new System.Data.SqlClient.SqlDataAdapter
{
    SelectCommand = new System.Data.SqlClient.SqlCommand(strSelect, c1)
};
DataSet ds1 = new DataSet();
// fill the dataset with the SELECT
da.Fill(ds1, "Invoices");
// write the XML for that DataSet into a zip file (split into 1 MB segments)
using (Ionic.Zip.ZipFile zip = new Ionic.Zip.ZipFile())
{
    zip.MaxOutputSegmentSize = 1024 * 1024;
    zip.AddEntry(zipEntryName, (name, stream) => ds1.WriteXml(stream));
    zip.Save(zipFileName);
}
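For readers without the DotNetZip library at hand, the same idea - serialize a dataset to XML and store it inside a zip entry - can be sketched in Python with only the standard library. Note this is just an analogy: Python's zipfile module has no multi-part segment feature, so MaxOutputSegmentSize has no direct equivalent here.

```python
import io
import xml.etree.ElementTree as ET
import zipfile

# Build a tiny stand-in "dataset" as XML.
root = ET.Element("Invoices")
ET.SubElement(root, "Invoice", id="1").text = "100.00"
xml_bytes = ET.tostring(root, encoding="utf-8")

# Write the XML into a zip archive held in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("invoices.xml", xml_bytes)

# Read it back to confirm the round trip.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    data = zf.read("invoices.xml")
```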
What is this "Ionic" in the code - are they using the DotNetZip library? Tell me what Ionic is.
I guess the line zip.MaxOutputSegmentSize = 1024*1024; means each segment size would be 1 MB? Am I right?
This line is not clear:
zip.AddEntry(zipEntryName, (name,stream) => ds1.WriteXml(stream) );
What is the meaning of (name,stream)? The stream is never declared, so how can they use it? What will this line do when the code runs?
Suppose the above code saves the dataset data in 5 separate zip files - what name will be used for each segment?
If possible, give me the code that will read all the segment files and extract their contents into a folder. Thanks.

Ionic is a 3rd-party library which can be downloaded by clicking the shortcut where you got the code. I don't recommend using compiled 3rd-party software unless the source code is provided. CodePlex solutions are usually very reliable.
Yes, 1024 x 1024 is the 1 MB size of each segment.
The library is using lambda syntax, the same shorthand LINQ uses. Simple LINQ looks like
.Select(x => x.ToString()), where x is a variable name that can be any letter(s) you choose; Select enumerates the collection and is really just a shortcut instead of writing a "FOR" loop.
The zip line is a more complicated version: a lambda with two parameters, (name, stream). I don't think the code actually works as written; you would have to look at the documentation at the CodePlex website.
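To make the two-parameter callback less mysterious, here is a minimal Python sketch of the same pattern (add_entry and its arguments are invented for illustration): the library, not your code, declares the stream and passes it into your lambda when the entry is actually written.

```python
import io

def add_entry(entry_name, writer):
    # The "library" side: it creates the output stream itself and then
    # invokes your callback, supplying both the entry name and the stream.
    stream = io.BytesIO()
    writer(entry_name, stream)
    return stream.getvalue()

# The caller never declares `stream`; it is just the callback's parameter,
# bound when the library calls it -- same as (name, stream) => ds1.WriteXml(stream).
data = add_entry("invoices.xml", lambda name, stream: stream.write(b"<Invoices/>"))
```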
jdweng -
Business Intelligence Center - User Training
I am the Project Server Administrator, but I'm a project manager with a business degree [not a DBA/IT professional]. My predecessor implemented the project portion of Project Server 2010, and I have to implement the Business Intelligence Center. I
know I need training, but there are so many options - everything from online guides to days-long classes in other countries. The online guides aren't helpful to me. Since I'm so new to Project Server and BIC, I need a more hands-on training approach.
However, cost is a factor and traveling out-of-state (let alone out-of-country) is cost-prohibitive. I'm in Michigan.
Any recommendations for BIC implementation training for a non-IT person?

Julie
In your search, do not look for Microsoft Project or Project Server, rather look for Tech Training companies that specialize in SQL, SharePoint and Excel Services. Understand that the BI Center is pure SharePoint functionality. I give you a high level overview
of this in my Implementing and Administering book. The BI center uses Office Data Connection (ODC) files, which are Excel Workbook files that contain a connection string and either a SQL query, or an oData query (2013 and later). You connect to these using
Excel to retrieve the datasets. Microsoft provides a number of these out of the box to connect to the reporting database and the OLAP cubes.
If you've got the Excel skills, then you should be good to go. The next hurdle is understanding the dataset. For that, you should download and install the Project 2010 SDK, which contains the data dictionary for the reporting store, should you need to change
an existing SQL query or create your own. Start by looking at the sample reports, which should give you some insight on how to handle project data.
Gary Chefetz, MCITP, MCP, MVP msProjectExperts
Project and Project Server FAQs
Project Server Help BLOG -
I need to understand how to get my master-detail html dataset to refresh.
My SPRY master-detail dataset does not refresh. Following is a description of what I have and what I am trying to do.
The data source is an embedded table. Each row of the table has two fields: a thumbnail (for the master section) and a full-size image (for the detail section). Each "tr" tag has a class which identifies the category of photos to be displayed. This feature of the application is located on a panel of a Spry tabbed-panels object. I have links from other pages that open specific tabbed panels; this part works just fine.
To select the various categories of photos I have a radio button group. By default the first category in the group is selected, and when the page initially loads it selects the correct photos: the thumbnails are displayed in the master section and the first full-size photo displays in the detail section. All is well to this point.
Each radio button has an "onClick" behavior which calls a JavaScript function with an argument specifying the desired category. The JavaScript function uses "window.location.assign" to refresh the page, passing URL parameters to open the appropriate tab (this works properly) and to pass along the selected category. The URL parameters show up in the browser just as they should, but the page does not seem to refresh. I have disabled the caching of the data and that doesn't help. I put an alert() function in to examine the incoming URL parameters, but it does not run at all. This convinces me that the page just isn't reloading.
Any help will be appreciated.

I have something similar, but instead of clicking a radio button, I click the tab of a Spry TabbedPanel widget. Clicking the tab activates a function that changes the XPath and reloads the data. All very simple.
function newXPath(thepath) {
ds1.setXPath(thepath);
ds1.loadData();
}
Incidentally, the above is for a SpryXMLDataSet.
For a SpryHTMLDataSet you would use setDataSelector("theDataSelector") or setRowSelector("theRowSelector"). For more info see here http://labs.adobe.com/technologies/spry/articles/html_dataset/index.html
I hope this helps.
Ben
PS. We try to help as much as possible. The best way for us to help you is to dispense with long stories and to supply an online URL so that we can see the issue in context.
PPS. I must admit, your description was spot on. -
In order to dynamically display data in the report header based on the current record of the dataset, we started using shared variables. We initially used ReportItems!SomeTextbox.Value, but we noticed that when SomeTextbox was not rendered in the body (usually because a comment section grew to occupy most of the page, if not more than one page), the ReportItem printed a blank/null value.
So, a method was defined in the Code section of the report that would set the value to the shared variable:
public shared Params as String
public shared Function SetValues(Param as String ) as String
Params = Param
Return Params
End Function
Which would be called in the detail section of the tablix, then in the header a textbox would hold the following expression:
=Code.Params
This worked beautifully, since it no longer mattered that the body section didn't have the SetValues call; the variable persisted and the header displayed the correct value. Our problem now is that when the report is called from different threads with different data, the variable, being shared/static, gets modified by all the reports being run at the same time.
So far I've tried several things:
- The variables need to be shared, otherwise the value set in the Body can't be seen by the header.
- Using Hashtables behaves exactly like the ReportItem option.
- Using a C# DLL with non static variables to take care of this, didn't work because apparently when the DLL is being called by the Body generates a different instance of the DLL than when it's called from the header.
So is there a way to deal with this issue in a multi thread safe way?
Thanks in advance!
Hi Angel,
Per my understanding, you want to dynamically display the group data in the report header. You have set a page break based on the group, so when you click to the next page the report header changes according to the value in the group, and when you use shared variables you get the multi-thread safety problem, right?
I have tested in my local environment and can reproduce the issue. Given the thread-safety problem, the better way is to use a Hashtable in the custom code. You mentioned that you tried the Hashtable but got the same result as using ReportItems!TextBox.Value; that can be caused by the logic of the code not working correctly.
Please refer to the custom code below, which works fine and gets the expected value displayed on every page:
Shared ht As System.Collections.Hashtable = New System.Collections.Hashtable
Public Function SetGroupHeader( ByVal group As Object _
,ByRef groupName As String _
,ByRef userID As String) As String
Dim key As String = groupName & userID
If Not group Is Nothing Then
Dim g As String = CType(group, String)
If Not (ht.ContainsKey(key)) Then
' must be the first pass so set the current group to group
ht.Add(key, g)
Else
If Not (ht(key).Equals(g)) Then
ht(key) = g
End If
End If
End If
Return ht(key)
End Function
Use this expression in the textbox of the report header:
=Code.SetGroupHeader(ReportItems!Language.Value,"GroupName", User!UserID)
Links below about the Hashtable and the multi-thread safety problem, for your reference:
http://stackoverflow.com/questions/2067537/ssrs-code-shared-variables-and-simultaneous-report-execution
http://sqlserverbiblog.wordpress.com/2011/10/10/using-custom-code-functions-in-reporting-services-reports/
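The reason the keyed Hashtable survives simultaneous executions can be sketched outside SSRS. This Python analogue (names invented for illustration) shows two "report executions" sharing one static table without clobbering each other, because each writes under its own groupName + userID key:

```python
import threading

ht = {}                    # shared/static table, like the VB Hashtable
lock = threading.Lock()    # guard concurrent access

def set_group_header(group, group_name, user_id):
    # Composite key isolates each report execution's stored value,
    # mirroring Dim key As String = groupName & userID in the VB code.
    key = f"{group_name}{user_id}"
    with lock:
        if group is not None and ht.get(key) != group:
            ht[key] = group
        return ht.get(key)

# Two concurrent executions with different users do not interfere.
a = set_group_header("English", "Language", "userA")
b = set_group_header("French", "Language", "userB")
# A later page of userA's report (group not rendered -> None) still
# sees userA's last value, untouched by userB.
c = set_group_header(None, "Language", "userA")
```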
If you still have any problem, please feel free to ask.
Regards
Vicky Liu -
How do I use FILE_GET_NAME and make my resulting dataset name unique?
Okay, here's a case where I have a bunch of pieces to the puzzle -- a little knowledge here, a little knowledge there -- but I'm having trouble putting them together.
I am working on an RFC that is called by XI as part of an interface. This interface will execute every 15 minutes. As part of the RFC's execution (which is very simple and straight-forward) I would like to write out a dataset of the processing results. I have already learned how to use the OPEN DATASET, TRANSFER, and CLOSE DATASET commands, so I'm good to go there.
Here's what I'd like to do: Because this can run every 15 minutes, I don't want to keep overwriting my dataset file with the latest version. I'd like to keep the dataset name unique so it doesn't happen. Obviously, the first thought that comes to mind is adding a date/time stamp to the file name, but I'm not sure how -- or the best way -- to do this.
Also, I was told I should put the file -- for now -- into the DIR_DATA directory. I had no idea what this meant until I was told about t-code "FILE" and that this was the logical file name. Someone in-house thought I'd need to use a function called FILE_GET_NAME to make things easier.
Okay, so I need to use FILE_GET_NAME apparently, somehow plugging in DIR_DATA as the directory I need, and I want the resulting file name to have the date/time added at run time. I'm thinking when it comes to batch processing and writing out datasets, this has to be something that someone's already "paved the road" doing. Has anyone done this? Do you have a little slice of code doing just this that you could post? This would go a long way toward helping me understand how this "fits" together, and I would greatly appreciate any help you can provide.
As always, points awarded to all helpful answers. Thank you so much!

Hey,
here is a brief description of logical & physical paths.
In the physical path, we give the full path of the file - where the file is actually located on the server.
For example, /INT/D01/IN/MYFILE is the physical path in my client for a particular file.
Sometimes this causes problems: the D01 in the path above is the development system, and if we move to quality it becomes Q01, etc.
To make every file independent of the server location, we use the logical path concept: instead of giving the full physical path as above, we give a logical path plus the file name. Before that, we create a logical path in SAP and assign a physical path to it.
The function module below is used to get the actual physical path by supplying the logical path name & file name:
*& Form GET_PHYSICAL_PATH
*  text: This form gets the physical file path from the logical path & the file name.
FORM GET_PHYSICAL_PATH.
DATA : LV_FILE(132) TYPE C,
V_LENGTH TYPE I ,
LV_LOGNAME LIKE FILEPATH-PATHINTERN.
LV_LOGNAME = P_LPATH.
*--P_LPATH is a parameter on the selection screen and
*--P_FNAME is the actual file name, declared as below:
*--PARAMETERS : P_LPATH TYPE RLGRAP-FILENAME,
*--             P_FNAME TYPE RLGRAP-FILENAME.
CALL FUNCTION 'FILE_GET_NAME_USING_PATH'
EXPORTING
CLIENT = SY-MANDT
LOGICAL_PATH = LV_LOGNAME
OPERATING_SYSTEM = SY-OPSYS
FILE_NAME = p_fname
IMPORTING
FILE_NAME_WITH_PATH = LV_FILE
EXCEPTIONS
PATH_NOT_FOUND = 1
MISSING_PARAMETER = 2
OPERATING_SYSTEM_NOT_FOUND = 3
FILE_SYSTEM_NOT_FOUND = 4
OTHERS = 5.
IF SY-SUBRC <> 0.
MESSAGE E000 WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ELSE.
*--ur total physical(absolute) path will be in LV_FILE.
V_FILEPATH = LV_FILE.
ENDIF.
ENDFORM. " GET_PHYSICAL_PATH
Unique naming for your files:
after getting the physical path from the above function module, append a date & time stamp to the file name as below.
CONCATENATE V_FILEPATH
SY-DATUM
SY-UZEIT
INTO V_FILEPATH.
This way you can make your file name unique always
regards
srikanth
-
Slow report caused by 85K record dataset
I'm developing a report that cross validates fields in the Item table, based on criteria-FILTERS. I have 30 such FILTERS.
Item Table is all about Item SKU properties and has Fields like: group, category, colour, price, cost, brand,class, etc
Filters should cross validate the fields that has not been set properly, for example: For category field = ABC , group field <> class field.
The output of each filter expected to be 5-50 records (possible mismatches)
My initial plan was
Use full big single dataset - 85K records x 20 fields from Item table NOT FILTERED
Create 30 Tablixes in the report and APPLY FILTERS on Tablix level
After creating 3 tablixes with filters, my report has become very slow.
QUESTION: for the better performance is it better to use 1 big dataset, 30 tablixes will reuse this dataset but apply own filters , each filter expected to be 5-50 records (my initial plan).
OR create 30 datasets where each dataset would return 5-50 records?

Hi BrBa,
As per my understanding, I think the second plan that create 30 datasets and each dataset returns 5-50 records can improve the report performance. The reasons are as follows:
There are 85K records in your database, while only about 1,000 records are needed. If we use the first plan, we have to add a lot of filters on the tablixes. This takes much more time than filtering the values directly with the SQL Server database engine.
Additionally, if you needed all 85K records, the first plan might be better, because the SQL Server database engine can quickly retrieve the data using a table scan, while the tablix filters would still be needed. The second plan needs a good enough quick-search method; this depends on your data.
The following document about Troubleshooting Reports: Report Performance is for your reference:
http://technet.microsoft.com/en-us/library/bb522806(v=sql.105).aspx
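The difference between the two plans can be sketched with an in-memory SQLite table (table and column names invented): plan 1 drags every row to the client and filters there, plan 2 lets the engine return only the handful of mismatches.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (sku TEXT, category TEXT, grp TEXT, cls TEXT)")
# 1,000 stand-in SKUs; every 100th one is a category-ABC row whose
# group field does not match its class field (a "mismatch").
conn.executemany(
    "INSERT INTO item VALUES (?, ?, ?, ?)",
    [(f"S{i}", "ABC" if i % 100 == 0 else "XYZ", "G1", "G2") for i in range(1000)],
)

# Plan 1: fetch the whole table, then filter in the report layer.
all_rows = conn.execute("SELECT * FROM item").fetchall()
client_side = [r for r in all_rows if r[1] == "ABC" and r[2] != r[3]]

# Plan 2: push the filter into the engine; only mismatches cross the wire.
server_side = conn.execute(
    "SELECT * FROM item WHERE category = 'ABC' AND grp <> cls"
).fetchall()
```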
If there are any other questions, please feel free to ask.
Regards,
Katherine Xiong
Katherine Xiong
TechNet Community Support -
Report Control and Multiple Datasets/Result Sets
I have four result sets - Plan, Forecast, Actual and SPLY - per eight different product lines. I want to add conditional formatting of the background color if Actual is above or below Plan and/or Forecast. The report needs to export properly to Excel.
Is there a way I can use one control, rather than multiple text boxes, to display the data, allow for conditional formatting, and export to Excel? Thanks!
Environment:
SSRS 2010
SQL Server 2012

Hi blairv,
If I understand correctly, you have a dataset in the report which includes four fields: Plan, Forecast, Actual and SPLY. Each field contains eight values.
In SQL Server Reporting Services (SSRS), we can use a table, matrix or list to display report data in cells. The cells typically contain text data such as text, dates, and numbers, but they can also contain gauges, charts, or report items such as images.
Reference: http://technet.microsoft.com/en-us/library/dd220592.aspx
In your case, we can use expression to configure these items background color. Please refer to the following steps:
Click a specific text box in the report, then click the BackgroundColor property in the Properties pane.
Click the Expression prompt and fill in the expression below:
=IIF(Fields!Actual.Value > Fields!Plan.Value Or Fields!Actual.Value > Fields!Forecast.Value, "Red", "Blue")
Reference: http://msdn.microsoft.com/en-us/library/ms157328.aspx
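The IIF logic above reduces to a one-line predicate. A Python sketch for sanity-checking the thresholds (function name invented):

```python
def background_color(actual, plan, forecast):
    # Mirrors =IIF(Fields!Actual.Value > Fields!Plan.Value Or
    #              Fields!Actual.Value > Fields!Forecast.Value, "Red", "Blue")
    return "Red" if actual > plan or actual > forecast else "Blue"
```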
If there is any misunderstanding, please feel free to let me know.
Regards,
Alisa Tang
If you have any feedback on our support, please click
here.
Alisa Tang
TechNet Community Support -
Open dataset......
Hi Folks,
I am trying to get some data using OPEN DATASET. The program is not showing any errors and executes fine, but I was not able to find the data that I write using OPEN DATASET. Where will I be able to see the data? In the given path there is no file downloaded. Kindly let me know.
run the program
enter some data in the blank fields of the alv
save it
now double click on the qty1 field in the alv.
Thanks,
K.Kiran.
REPORT zlabel.
TYPE-POOLS:slis,icon.
TABLES:makt.
*Declarations for ALV
DATA:itfieldcat TYPE slis_t_fieldcat_alv WITH HEADER LINE.
DATA:itfieldcat1 TYPE slis_t_fieldcat_alv WITH HEADER LINE.
DATA:itprintparams TYPE slis_print_alv.
DATA:itrepid TYPE sy-repid.
itrepid = sy-repid.
DATA:itevent TYPE slis_t_event.
DATA:itlistheader TYPE slis_t_listheader.
DATA:walistheader LIKE LINE OF itlistheader.
DATA:itlayout TYPE slis_layout_alv.
DATA:top TYPE slis_formname.
DATA:itsort TYPE slis_t_sortinfo_alv WITH HEADER LINE.
DATA : grid TYPE REF TO cl_gui_alv_grid.
*Declaration for DSN
DATA : file(50) VALUE 'E:\userdata\labelfiles'.
DATA : dsn(150).
DATA : dsn1(100).
DATA : n1(4) TYPE n.
*Declarations for Internal tables.
DATA:BEGIN OF imakt OCCURS 0,
matnr LIKE makt-matnr,
spras LIKE makt-spras,
maktx LIKE makt-maktx,
label1(03) TYPE c,
qty1(03) TYPE c,
label2(03) TYPE c,
qty2(03) TYPE c,
END OF imakt.
DATA:ITFINAL LIKE imakt OCCURS 0 WITH HEADER LINE.
SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
SELECT-OPTIONS:matnr FOR makt-matnr OBLIGATORY.
SELECTION-SCREEN END OF BLOCK b1.
START-OF-SELECTION.
PERFORM getdata.
IF sy-subrc = 0.
PERFORM alv.
ELSE.
STOP.
ENDIF.
*& Form getdata
* text
FORM getdata.
SELECT matnr
spras
maktx
FROM makt
INTO CORRESPONDING FIELDS OF TABLE imakt
WHERE spras = sy-langu.
ENDFORM. "getdata
*& Form ALV
* text
FORM alv.
DEFINE m_fieldcat.
itfieldcat-fieldname = &1.
itfieldcat-col_pos = &2.
itfieldcat-seltext_l = &3.
itfieldcat-do_sum = &4.
itfieldcat-outputlen = &5.
itfieldcat-edit = &6.
append itfieldcat to itfieldcat.
clear itfieldcat.
END-OF-DEFINITION.
m_fieldcat 'MATNR' '' 'MATERIAL No' '' 18 ''.
m_fieldcat 'SPRAS' '' 'Language' '' 02 ''.
m_fieldcat 'MAKTX' '' 'Description' '' 40 ''.
m_fieldcat 'LABEL1' '' 'LABEL1' '' 12 'X'.
m_fieldcat 'QTY1' '' 'QTY1' '' 12 'X'.
m_fieldcat 'LABEL2' '' 'LABEL2' '' 12 'X'.
m_fieldcat 'QTY2' '' 'QTY2' '' 12 'X'.
itlayout-zebra = 'X'.
itlayout-colwidth_optimize = 'X'.
itlayout-no_subtotals = ' '.
CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
EXPORTING
i_callback_program = sy-repid
is_layout = itlayout
i_callback_pf_status_set = 'PF_STATUS'
i_callback_user_command = 'LIST1'
i_callback_top_of_page = 'TOP'
it_fieldcat = itfieldcat[]
i_save = 'X'
*   is_variant = ITVARIANT
it_events = itevent[]
is_print = itprintparams
it_sort = itsort[]
TABLES
t_outtab = imakt
EXCEPTIONS
program_error = 1
OTHERS = 2.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
CLEAR itfieldcat.
ENDFORM. "ALV
*& Form list1
text
-->R_UCOMM text
-->RS_SELFIELDtext
FORM list1 USING r_ucomm LIKE sy-ucomm
rs_selfield TYPE slis_selfield.
CASE r_ucomm.
WHEN 'EXIT'.
STOP.
ENDCASE.
CLEAR itfieldcat1.
REFRESH itfieldcat1.
DEFINE k_fieldcat.
itfieldcat1-fieldname = &1.
itfieldcat1-col_pos = &2.
itfieldcat1-seltext_l = &3.
itfieldcat1-outputlen = &4.
append itfieldcat1 to itfieldcat1.
clear itfieldcat1.
END-OF-DEFINITION.
k_fieldcat 'MATNR' '' 'MATERIAL No' 18 .
k_fieldcat 'SPRAS' '' 'Language' 02 .
k_fieldcat 'MAKTX' '' 'Description' 40 .
k_fieldcat 'LABEL1' '' 'LABEL1' 12 .
k_fieldcat 'QTY1' '' 'QTY1' 12 .
k_fieldcat 'LABEL2' '' 'LABEL2' 12 .
k_fieldcat 'QTY2' '' 'QTY2' 12 .
CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
EXPORTING
i_callback_program = sy-repid
is_layout = itlayout
i_callback_pf_status_set = 'PF_STATUS'
i_callback_user_command = 'LIST2'
i_callback_top_of_page = 'TOP'
it_fieldcat = itfieldcat1[]
i_save = 'X'
*   is_variant = ITVARIANT
it_events = itevent[]
is_print = itprintparams
it_sort = itsort[]
TABLES
t_outtab = imakt
EXCEPTIONS
program_error = 1
OTHERS = 2.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
CLEAR:itfieldcat1,itfieldcat.
ENDFORM. "list1
*& Form list2
* text
* -->R_UCOMM text
* -->RS_SELFIELD text
FORM list2 USING r_ucomm LIKE sy-ucomm
rs_selfield TYPE slis_selfield.
CASE r_ucomm.
WHEN '&IC1'.
IF rs_selfield-fieldname = 'QTY1'.
LOOP AT IMAKT.
CONCATENATE file n1 '.PJ' INTO dsn.
PERFORM dsn.
CLEAR dsn.
n1 = n1 + 1.
ENDLOOP.
ENDIF.
ENDCASE.
ENDFORM. "list2
*& Form top
* text
FORM top.
DATA:title(70) TYPE c.
CALL FUNCTION 'REUSE_ALV_EVENTS_GET'
EXPORTING
i_list_type = 0
IMPORTING
et_events = itevent
EXCEPTIONS
LIST_TYPE_WRONG = 1
OTHERS = 2.
IF sy-subrc <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
title = 'LABEL'.
walistheader-typ = 'H'.
walistheader-info = title.
APPEND walistheader TO itlistheader.
CALL FUNCTION 'REUSE_ALV_COMMENTARY_WRITE'
EXPORTING
it_list_commentary = itlistheader
i_logo = ''.
*   i_end_of_list_grid =
CLEAR itlistheader.
ENDFORM. "TOP
*& Form DSN
* text
* --> p1 text
* <-- p2 text
FORM dsn.
OPEN DATASET dsn FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc <> 0.
LEAVE TO LIST-PROCESSING.
WRITE:/ 'FILE COULD NOT BE OPENED'.
EXIT.
ENDIF.
MOVE-CORRESPONDING IMAKT TO ITFINAL.
APPEND ITFINAL.
TRANSFER 'MDA.LBL' TO dsn.
TRANSFER '1' TO dsn.
TRANSFER: ITFINAL-MATNR TO DSN,
ITFINAL-SPRAS TO DSN,
ITFINAL-MAKTX TO DSN,
ITFINAL-LABEL1 TO DSN,
ITFINAL-QTY1 TO DSN,
ITFINAL-LABEL2 TO DSN,
ITFINAL-QTY2 TO DSN.
IF sy-subrc <> 0.
WRITE:/ 'Check your code'.
ENDIF.
CLOSE DATASET DSN.
ENDFORM.

Hi Kiran,
To my understanding your file is not being created because you have not passed a location for your file to be put. Refer to my code below to see how it's done.
If you need more info let me know.
* TYPES DECLARATIONS *
*Data type for Accounting Document Header Table
TYPES: BEGIN OF gt_bkpf,
bukrs TYPE bkpf-bukrs, "Company Code
belnr TYPE bkpf-belnr, "Accounting Document Number
gjahr TYPE bkpf-gjahr, "Fiscal Year
budat TYPE bkpf-budat, "Posting Date
END OF gt_bkpf.
*Data type for Accounting Document Segment Table
TYPES: BEGIN OF gt_bseg,
bukrs TYPE bkpf-bukrs, "Company Code
belnr TYPE bkpf-belnr, "Accounting Document Number
gjahr TYPE bkpf-gjahr, "Fiscal Year
buzei TYPE bseg-buzei , "Line Item
shkzg TYPE bseg-shkzg , "Debit/Credit Indicator
wrbtr TYPE bseg-wrbtr , "Amount in document currency
kostl TYPE bseg-kostl , "Cost Center
aufnr TYPE bseg-aufnr , "Project / Order Number
hkont TYPE bseg-hkont , "General Ledger Account Key
prctr TYPE bseg-prctr , "Profit Center
segment TYPE bseg-segment, "Segment
END OF gt_bseg.
*Data type for Posting Summary Table of given file format
TYPES: BEGIN OF gt_posting_summary,
effective_date(10) TYPE c, "Date of last tuesday
company_code(4) TYPE c, "Company Code
gl_key(6) TYPE c, "General Ledger Account Key
cost_centre(10) TYPE c, "Cost Center
profit_centre(10) TYPE c, "Profit Center
project(12) TYPE c, "Order Number
segment(10) TYPE c, "Segment for Segmental Reporting
amount(16) TYPE c, "Amount with minor denomination & debit/credit indicator
END OF gt_posting_summary.
* INTERNAL TABLE DECLARATIONS *
DATA:
*Internal table for Accounting Document Header Table
gi_bkpf TYPE STANDARD TABLE OF gt_bkpf,
*Internal table for Accounting Document Segment Table
gi_bseg TYPE STANDARD TABLE OF gt_bseg,
*Internal table for Posting Summary Table of given file format
gi_posting_summary TYPE STANDARD TABLE OF gt_posting_summary.
* RANGES DECLARATIONS *
DATA:
*Building ranges table for last saturday to current date
gr_date TYPE RANGE OF sy-datum.
* WORK AREA DECLARATIONS *
DATA:
*Work area for Accounting Document Segment Table
gwa_bseg TYPE gt_bseg,
*Work area for Accounting Document Segment Table
gwa_bkpf TYPE gt_bkpf,
*Work area for Posting Summary Table of given file format
gwa_posting_summary TYPE gt_posting_summary,
*Work area for ranges table for last saturday to current date
gwa_date LIKE LINE OF gr_date.
* GLOBAL VARIABLE DECLARATIONS *
DATA: gv_to_date TYPE sy-datum,
gv_from_date TYPE sy-datum,
gv_effective_date(10) TYPE c,
gv_posting_amount(16) TYPE c,
gv_file_name TYPE string,
gv_server_file_name TYPE fileextern,
gv_suspense_accnt TYPE hkont,
gv_amount TYPE wrbtr.
* GLOBAL CONSTANT DECLARATIONS *
DATA: gc_x TYPE c VALUE 'X',
gc_s TYPE bseg-shkzg VALUE 'S',
gc_h TYPE bseg-shkzg VALUE 'H',
gc_i TYPE tvarv-sign VALUE 'I',
gc_bt TYPE tvarv-opti VALUE 'BT',
gc_ys TYPE bkpf-blart VALUE 'YS',
gc_zfii0431 TYPE filepath-pathintern VALUE 'ZFII0431',
gc_debit TYPE c VALUE '+',
gc_credit TYPE c VALUE '-',
gc_dot TYPE c VALUE '.',
gc_suspense_key TYPE zglkey VALUE 'SUSPENSE_GL_ACCOUNT'.
* Selection Screen *
SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-t01.
SELECTION-SCREEN SKIP 1.
*Accounting Document type for eSFA postings
PARAMETERS : p_ys TYPE bkpf-blart DEFAULT gc_ys.
SELECTION-SCREEN SKIP 1.
*Logical Path of file to be downloaded on Application Server
PARAMETERS : p_l_path TYPE filepath-pathintern DEFAULT gc_zfii0431.
SELECTION-SCREEN END OF BLOCK b1.
SELECTION-SCREEN BEGIN OF BLOCK b2 WITH FRAME TITLE text-t02.
SELECT-OPTIONS : s_date FOR sy-datum .
SELECTION-SCREEN END OF BLOCK b2.
* ON LOAD EVENT - occurs only once, when the program is loaded *
LOAD-OF-PROGRAM.
************************MAIN PROGRAM************************
* START-OF-SELECTION - start of database access *
START-OF-SELECTION.
* Clear all global variables
PERFORM clear_memory.
* Routine for calculating the dates of last Saturday and current Tuesday
PERFORM calculate_posting_dates.
* Routine to select posted G/L records from the database
PERFORM posting_record_selection.
* Routine to compile the posting summary table
PERFORM build_posting_summary.
* Routine to build the filename as eSFA_GL_CandC_YYYYMMDD.txt
PERFORM build_file_name.
* Routine to compile the physical path of the file on the application
* server from the logical path and the desired filename
PERFORM get_physical_path USING p_l_path
gv_file_name
gc_x
CHANGING gv_server_file_name.
* Routine to download the file to the application server
PERFORM download_on_application_server.
**************************INCLUDES***************************
INCLUDE zfi_get_physical_path.
INCLUDE zfi_file_status_change.
*************************SUBROUTINES*************************
*& Form calculate_posting_dates
* Routine for calculating the dates of last Saturday and current Tuesday
FORM calculate_posting_dates .
DATA: lv_monday TYPE sy-datum.
*Get the first day of the week.
CALL FUNCTION 'BWSO_DATE_GET_FIRST_WEEKDAY'
EXPORTING
date_in = sy-datum
IMPORTING
date_out = lv_monday.
*Calculate the to date (saturday)
gv_to_date = lv_monday - 2.
*Calculate the from date (sunday)
gv_from_date = lv_monday - 8.
IF s_date-low IS NOT INITIAL.
gv_from_date = s_date-low.
ENDIF.
IF s_date-high IS NOT INITIAL.
gv_to_date = s_date-high.
ENDIF.
ENDFORM. " calculate_posting_dates
*& Form build_posting_summary
* Routine for processing the posting summary data table & compiling
* it into the given output file table
FORM build_posting_summary .
DATA : lv_kostl TYPE kostl,
lv_amount TYPE wrbtr,
lv_amount_str TYPE wrbtr.
DATA : li_posting_summary TYPE TABLE OF gt_posting_summary.
DATA : lwa_posting_summary TYPE gt_posting_summary.
LOOP AT gi_bseg INTO gwa_bseg.
READ TABLE gi_bkpf INTO gwa_bkpf
WITH KEY bukrs = gwa_bseg-bukrs
belnr = gwa_bseg-belnr
gjahr = gwa_bseg-gjahr.
* Routine to calculate the effective date in format YYYY-MM-DD
PERFORM calculate_effective_date.
gwa_posting_summary-effective_date = gv_effective_date.
gwa_posting_summary-company_code = gwa_bseg-bukrs.
gwa_posting_summary-gl_key = gwa_bseg-hkont+4(6).
gwa_posting_summary-cost_centre = gwa_bseg-kostl.
gwa_posting_summary-profit_centre = gwa_bseg-prctr.
gwa_posting_summary-project = gwa_bseg-aufnr.
gwa_posting_summary-segment = gwa_bseg-segment.
* Remove the derived fields created in SAP while posting
IF gwa_bseg-kostl IS NOT INITIAL OR
gwa_bseg-aufnr IS NOT INITIAL.
CLEAR: gwa_posting_summary-profit_centre,
gwa_posting_summary-segment.
ENDIF.
IF gwa_bseg-aufnr IS NOT INITIAL.
* Substitution of internal order to cost center for document type 'YS'
SELECT SINGLE cost_centre
INTO lv_kostl
FROM ztfi_sub_costctr
WHERE internal_order = gwa_bseg-aufnr.
IF sy-subrc = 0.
gwa_posting_summary-cost_centre = lv_kostl.
CLEAR gwa_posting_summary-project.
ENDIF.
ENDIF.
IF gwa_bseg-shkzg = gc_h.
gwa_bseg-wrbtr = gwa_bseg-wrbtr * -1.
gwa_posting_summary-amount = gwa_bseg-wrbtr.
ELSE.
gwa_posting_summary-amount = gwa_bseg-wrbtr.
ENDIF.
APPEND gwa_posting_summary TO gi_posting_summary.
CLEAR gwa_posting_summary.
CLEAR gwa_bseg.
ENDLOOP. "LOOP AT gi_bseg INTO gwa_bseg
* Sort to find the summary
SORT gi_posting_summary BY company_code
gl_key
cost_centre
profit_centre
project
segment.
*Summarise amount for unique entries
LOOP AT gi_posting_summary INTO gwa_posting_summary.
lv_amount_str = gwa_posting_summary-amount.
lv_amount = lv_amount + lv_amount_str.
AT END OF segment.
gv_amount = lv_amount.
PERFORM calculate_amount.
gwa_posting_summary-amount = gv_posting_amount.
APPEND gwa_posting_summary TO li_posting_summary.
CLEAR lv_amount.
ENDAT.
ENDLOOP.
*Copy the summarised table back to summary table
gi_posting_summary = li_posting_summary.
ENDFORM. " build_posting_summary
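The SORT plus AT END OF loop above is a standard control-break group-summation pattern. A rough Python equivalent (field names mirror the ABAP structure; the values are made up for illustration):

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical records mirroring gi_posting_summary (values illustrative)
records = [
    {"company_code": "1000", "gl_key": "400100", "segment": "A", "amount": 10.5},
    {"company_code": "1000", "gl_key": "400100", "segment": "A", "amount": -2.5},
    {"company_code": "1000", "gl_key": "400200", "segment": "B", "amount": 7.0},
]

# SORT ... BY the key fields, then sum each group (the AT END OF block)
key = itemgetter("company_code", "gl_key", "segment")
records.sort(key=key)

summary = []
for _, group in groupby(records, key=key):
    rows = list(group)
    total = sum(r["amount"] for r in rows)
    summarized = dict(rows[-1])  # keep the last row of the group, like gwa_posting_summary
    summarized["amount"] = total
    summary.append(summarized)

print([r["amount"] for r in summary])  # [8.0, 7.0]
```

As in ABAP, groupby only collapses adjacent rows, which is why the sort by exactly the same key fields must come first.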
*& Form calculate_effective_date
* Routine to calculate effective date in format YYYY-MM-DD
FORM calculate_effective_date .
DATA: lv_date(8) TYPE c,
lv_yyyy(4) TYPE c,
lv_mm(2) TYPE c,
lv_dd(2) TYPE c,
lv_effective_date(10) TYPE c.
DATA: lc_dash TYPE c VALUE '-'.
lv_date = gv_to_date.
lv_yyyy = lv_date+0(4).
lv_mm = lv_date+4(2).
lv_dd = lv_date+6(2).
CONCATENATE lv_yyyy
lc_dash
lv_mm
lc_dash
lv_dd
INTO lv_effective_date.
gv_effective_date = lv_effective_date.
ENDFORM. " calculate_effective_date
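The slicing in calculate_effective_date (offsets +0(4), +4(2), +6(2)) is a plain string reformat from the ABAP internal date YYYYMMDD to YYYY-MM-DD. The same step in Python:

```python
def effective_date(abap_date: str) -> str:
    """Convert an ABAP-style date 'YYYYMMDD' to 'YYYY-MM-DD'."""
    yyyy, mm, dd = abap_date[0:4], abap_date[4:6], abap_date[6:8]
    return f"{yyyy}-{mm}-{dd}"

print(effective_date("20240316"))  # 2024-03-16
```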
*& Form posting_record_selection
* Routine to select posted G/L records from database
FORM posting_record_selection .
gwa_date-sign = gc_i.
gwa_date-option = gc_bt.
gwa_date-low = gv_from_date.
gwa_date-high = gv_to_date.
APPEND gwa_date TO gr_date.
*Get the suspense GL account number from the ZTFI_SIXPARTKEY table.
SELECT SINGLE gl_account
FROM ztfi_sixpartkey
INTO gv_suspense_accnt
WHERE sixpartkey = gc_suspense_key .
IF sy-subrc IS INITIAL.
* Selection of records (other than suspense acc. no. 999999) posted
* between last Saturday and current Tuesday where document type is 'YS'
SELECT bukrs "Company Code
belnr "Accounting Document Number
gjahr "Fiscal Year
budat "Posting Date
FROM bkpf
INTO TABLE gi_bkpf
WHERE blart EQ p_ys
AND budat IN gr_date. "Change - MV - 16.04.2007 - FCDK902208
* AND cpudt IN gr_date.
IF sy-subrc IS NOT INITIAL.
WRITE / 'No records for current posting period, file not created.'(m03).
ELSEIF sy-subrc IS INITIAL.
READ TABLE gi_bkpf INTO gwa_bkpf INDEX 1.
IF sy-subrc = 0.
PERFORM calculate_effective_date.
ENDIF.
* Selection of details of all records selected in above table
SELECT bukrs "Company Code
belnr "Document Number
gjahr "fiscal year
buzei "Line Item
shkzg "Debit/Credit Indicator
wrbtr "Amount in document currency
kostl "Cost Center
aufnr "Project / Order Number
hkont "General Ledger Account Key
prctr "Profit Center
segment "Segment
FROM bseg
INTO TABLE gi_bseg
FOR ALL ENTRIES IN gi_bkpf
WHERE bukrs = gi_bkpf-bukrs
AND belnr = gi_bkpf-belnr
AND gjahr = gi_bkpf-gjahr
AND hkont <> gv_suspense_accnt.
IF sy-subrc IS NOT INITIAL.
WRITE / 'No records found, file not created.'(002).
ENDIF.
ENDIF.
ELSE.
WRITE / 'Suspense GL account is not maintained in sixpart key look up table'(001).
ENDIF.
ENDFORM. " posting_record_selection
*& Form calculate_amount
* Routine to concatenate amount and debit/credit indicator
FORM calculate_amount .
DATA: lv_amount(15) TYPE c,
lv_amount_1(12) TYPE c,
lv_amount_2(2) TYPE c,
lv_debit_credit TYPE c.
IF gv_amount <= 0.
lv_debit_credit = gc_credit.
gv_amount = gv_amount * -1.
ELSE.
lv_debit_credit = gc_debit.
ENDIF.
lv_amount = gv_amount.
SPLIT lv_amount AT gc_dot
INTO lv_amount_1
lv_amount_2.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
EXPORTING
input = lv_amount_1
IMPORTING
output = lv_amount_1.
CONCATENATE lv_amount_1
lv_amount_2
INTO lv_amount
SEPARATED BY gc_dot.
CONCATENATE lv_debit_credit lv_amount INTO gv_posting_amount.
ENDFORM. " calculate_amount
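calculate_amount flips the sign, splits the amount at the decimal point, zero-pads the whole part via CONVERSION_EXIT_ALPHA_INPUT, and prefixes a debit/credit flag. A Python sketch of the same formatting (the 'D'/'C' letters and the 12-character width are assumptions for illustration; the original uses gc_debit/gc_credit and a 12-char field):

```python
def format_amount(amount: float, width: int = 12) -> str:
    """Prefix a debit/credit indicator and left-pad the whole part with
    zeros, mimicking CONVERSION_EXIT_ALPHA_INPUT on a 12-char field."""
    indicator = "C" if amount <= 0 else "D"   # credit for <= 0, debit otherwise
    whole, _, frac = f"{abs(amount):.2f}".partition(".")
    return indicator + whole.zfill(width) + "." + frac

print(format_amount(-1234.5))  # C000000001234.50
print(format_amount(7))        # D000000000007.00
```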
*& Form build_file_name
* Routine to build filename as eSFA_GL_CandC_YYYYMMDD.txt
FORM build_file_name .
DATA: lv_date(8) TYPE c,
lv_file_name TYPE string.
DATA: lc_file_prefix(14) TYPE c VALUE 'eSFA_GL_CandC_',
lc_file_suffix(4) TYPE c VALUE '.txt'.
lv_date = gv_to_date.
CONCATENATE lc_file_prefix lv_date lc_file_suffix INTO lv_file_name.
gv_file_name = lv_file_name.
ENDFORM. " build_file_name
*& Form download_on_application_server
* Routine to download file on application server
FORM download_on_application_server.
DATA : lv_command TYPE string,
lv_lines TYPE i.
lv_command = 'ZFII0431'.
* File should be downloaded only if it is not empty
IF gi_bseg IS NOT INITIAL.
* Open file for output in text mode
OPEN DATASET gv_server_file_name FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc IS NOT INITIAL.
WRITE / 'File could not be opened.'(m01).
EXIT.
ENDIF. "IF sy-subrc IS NOT INITIAL
DESCRIBE TABLE gi_posting_summary LINES lv_lines.
LOOP AT gi_posting_summary INTO gwa_posting_summary.
IF sy-tabix <> lv_lines.
* Transfer data to application server
TRANSFER gwa_posting_summary TO gv_server_file_name.
ELSE.
* Transfer data to application server without end of line
TRANSFER gwa_posting_summary TO gv_server_file_name NO END OF LINE.
ENDIF.
ENDLOOP. " LOOP AT gi_posting_summary INTO gwa_posting_summary
* Close file for output in text mode
CLOSE DATASET gv_server_file_name.
IF sy-subrc IS NOT INITIAL.
WRITE / 'File could not be closed.'(m02).
EXIT.
ELSEIF sy-subrc IS INITIAL.
WRITE : / 'File Name:', gv_file_name .
WRITE : / 'File Downloaded to Application Server successfully.'(m04).
* Call the OS command to run Shell script.
PERFORM change_file_status USING lv_command.
ENDIF.
ENDIF.
ENDFORM. " download_on_application_server
*& Form clear_memory
* Clear all global variables
FORM clear_memory .
REFRESH: gi_bkpf ,
gi_bseg ,
gi_posting_summary.
REFRESH: gr_date.
CLEAR: gwa_bseg ,
gwa_posting_summary,
gwa_date .
CLEAR: gv_from_date ,
gv_to_date ,
gv_effective_date ,
gv_posting_amount ,
gv_file_name ,
gv_server_file_name.
ENDFORM. " clear_memory
***INCLUDE ZFI_GET_PHYSICAL_PATH .
*& Form get_physical_path
* Routine to compile the physical path of the file on the application
* server from the logical path and desired filename.
*      -->P_P_L_PATH            text
*      -->P_GV_FILE_NAME        text
*      -->P_GC_OK               text
*      <--P_GV_SERVER_FILE_NAME text
FORM get_physical_path USING p_p_l_path TYPE any
p_gv_file_name TYPE any
p_gc_ok TYPE any
CHANGING p_gv_server_file_name TYPE any.
CALL FUNCTION 'FILE_GET_NAME_USING_PATH'
EXPORTING
client = sy-mandt
logical_path = p_p_l_path
operating_system = sy-opsys
file_name = p_gv_file_name
eleminate_blanks = p_gc_ok
IMPORTING
file_name_with_path = p_gv_server_file_name
EXCEPTIONS
path_not_found = 1
missing_parameter = 2
operating_system_not_found = 3
file_system_not_found = 4
OTHERS = 5.
IF sy-subrc <> 0.
MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
ENDFORM. " get_physical_path
-
Report missing some data from dataset
Hi
I populate my report using a dataset and most of the time the report generation is successful.
But after a while (3-5 reports) I start getting missing data in the report: either the PNG blobs do not get printed, or some other string fields. Usually the same strings are missing.
If I continue generating reports after that, I get a "report could not be submitted for background processing" error, or, since I use some fields for formatting, I get errors in the formulas where I try to use the missing fields.
I have checked the dataset before SetDataSource and it is complete, images and all.
Is there some way to restart the embedded Crystal Reports thread/process programmatically from C#? If the problem is caused by memory fragmentation, this might solve it.
I use Crystal Reports Basic 2008 for Visual Studio 2008.
Kind regards
//Gunnar
In a memory-intensive environment like this, it is possible that we will run into these issues. The thing is, as I understand these things, Microsoft does not let a 3rd party just go out and use the dataset directly; e.g., the Crystal Reports engine is forced to make a copy of the dataset and use that. So now you've chewed up more memory again. It would be interesting to do four things:
1) monitor the memory and see if there is a particular value at which it is more likely to get the issue
2) take out the report and dataset into another sample app that is not using these x-rays and see if you duplicate the issue there
3) see if CR 2008 (12.x) handles the memory better. As these versions go, there are always improvements. I am not aware of a particular "thing" done to the report engine or database engine in this respect, but it may be worth trying. An eval of CR 2008 can be downloaded from here:
http://www.sap.com/solutions/sapbusinessobjects/sme/freetrials/index.epx
Once you have installed the above, check the version (Help | About). If the version is not 12.3.0.601, download SP 3 from here:
https://smpdl.sap-ag.de/~sapidp/012002523100007123572010E/cr2008_sp3.exe
4) rather than using datasets, would it be possible to write the data into some database temp table and consume the data from there?
Ludek -
Microsoft Visual Basic 2010 Express.
I am new to Visual Basic programming and I am trying to understand the relationships between datasets, databases, and table adapters. I have the following code that is giving me this error: "Unable to load, Update requires a valid DeleteCommand when passed DataRow collection with deleted rows".
I can track the error and it's located in the "OffsetTableTableAdapter.Update(MaterionOffsetDataSet.OffsetTable)" code. What am I missing?
It seems that I can delete the data on the DataGridView table and it then displays the correct data, but my database is not updating, even though the data grid displays differently. I can determine this because, when I save the offset database, I still have all the previous uploads, and all the rows that I wanted to delete are still there.
My final goal is to be able to import offset data from a CSV file, save this data on the PC, and send a copy of this data to a NumericUpDown so the customer can modify certain numbers. From there they download all the data to a controller. If the customer needs to modify the imported data, they can go to a tab with a DataGridView and modify the table. They will also have the option to save the modified data into a CSV file.
I'm not sure if I am making this overcomplicated or if there is an easier way to program this.
CODE:
Private Function LoadOffSetData()
Dim LoadOffsetDialog As New OpenFileDialog 'create a new open file dialog and setup its parameters
LoadOffsetDialog.DefaultExt = "csv"
LoadOffsetDialog.Filter = "csv|*.csv"
LoadOffsetDialog.Title = "Load Offset Data"
LoadOffsetDialog.FileName = "RollCoaterOffset.csv"
If LoadOffsetDialog.ShowDialog() = Windows.Forms.DialogResult.OK Then 'show the dialog and if the result is ok then
Try
Dim myStream As New System.IO.StreamReader(LoadOffsetDialog.OpenFile) 'try to open the file with a stream reader
If (myStream IsNot Nothing) Then 'if the file is valid
For Each oldRow As MaterionOffsetDataSet.OffsetTableRow In MaterionOffsetDataSet.OffsetTable.Rows
oldRow.Delete()
'delete all of the existing rows
Next
'OffsetTableTableAdapter.Update(MaterionOffsetDataSet.OffsetTable)
Dim rowvalue As String
Dim cellvalue(25) As String
'Reading CSV file content
While myStream.Peek() <> -1
Dim NRow As MaterionOffsetDataSet.OffsetTableRow
rowvalue = myStream.ReadLine()
cellvalue = rowvalue.Split(","c) 'check what is ur separator
NRow = MaterionOffsetDataSet.OffsetTable.Rows.Add(cellvalue)
Me.OffsetTableTableAdapter.Update(NRow)
End While
Me.OffsetTableTableAdapter.Update(MaterionOffsetDataSet.OffsetTable)
MainOffset.Value = OffsetTableTableAdapter.MainOffsetValue 'saves all the table offsets
'to the offset NumericUpDown registers in the main window
StationOffset01.Value = OffsetTableTableAdapter.Station01Value
StationOffset02.Value = OffsetTableTableAdapter.Station02Value
myStream.Close() 'close the stream
Return True
Else 'if we were not able to open the file then
MsgBox("Unable to load, check file name and location") 'let the operator know that the file wasn't able to open
Return False
End If
Catch ex As Exception
MsgBox("Unable to load, " + ex.Message)
Return False
End Try
Else
Return False
End If
End Function
Hello SaulMTZ,
>> I can track the error and it's located in the "OffsetTableTableAdapter.Update(MaterionOffsetDataSet.OffsetTable)" code. What am I missing?
This error usually shows that you did not initialize the DeleteCommand object; you could check this article to see if you get a workaround.
>> I'm not sure if I am making this overcomplicated or if there is an easier way to program this.
If you are working with a CSV file, you could use OleDb to read it, which would treat the CSV file as a table:
http://www.codeproject.com/Articles/27802/Using-OleDb-to-Import-Text-Files-tab-CSV-custom
This seems to be easier (in my opinion).
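For comparison, the delete-everything-then-reinsert flow from the question can be done as a single transaction so the delete and the inserts commit together. A minimal sketch in Python using the standard csv and sqlite3 modules (the table and column names are illustrative, not from the original project):

```python
import csv
import io
import sqlite3

# In-memory stand-ins for the database and the uploaded CSV file
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE offsets (station TEXT, value REAL)")
con.execute("INSERT INTO offsets VALUES ('old', 0.0)")

csv_data = io.StringIO("main,1.5\nstation01,2.0\n")

# Replace the table contents atomically: delete the old rows and insert
# the parsed CSV rows inside one transaction, committed once at the end
with con:
    con.execute("DELETE FROM offsets")
    rows = [(r[0], float(r[1])) for r in csv.reader(csv_data)]
    con.executemany("INSERT INTO offsets VALUES (?, ?)", rows)

print(con.execute("SELECT * FROM offsets").fetchall())
# [('main', 1.5), ('station01', 2.0)]
```

The one-transaction shape also avoids the per-row Update calls in the original loop, which is where the missing DeleteCommand surfaced.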
Regards.
We are trying to better understand customer views on the social support experience, so your participation in this interview project would be greatly appreciated if you have time. Thanks for helping make community forums a great place.
Click HERE to participate in the survey.
-
Hi all,
As we can get data from .xls and .xlsx files into a dataset, I now want to read all the data from a .xltm file.
I also want to get all the sheet names from the .xltm file. I am using .NET Framework 4.5 and Visual Studio 2012.
Please help me; I am unable to extract it. It is urgent.
Hello Kalpna Bindal,
I just investigated this issue and tried the following way, but it does not work:
private void Button_Click(object sender, RoutedEventArgs e)
{
    // initialize connection
    OleDbConnection con = new OleDbConnection();
    con.ConnectionString =
        "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=D:\\Fruits1.xltm;Extended Properties=\"Excel 12.0 Xml;HDR=YES;\"";
    // pass query to insert values into Excel
    OleDbCommand cmd = new OleDbCommand();
    cmd.CommandText = "insert into [Sheet1$] values('')";
    cmd.Connection = con;
    // execute query
    con.Open();
    cmd.ExecuteNonQuery();
    con.Close();
}
I then did some research on this issue and found the following case:
http://stackoverflow.com/questions/22225017/reading-excel-xltm-files-programmatically
To test whether it is possible I use the following code in WPF:
Microsoft.Office.Interop.Excel.Application xltmApp = new Microsoft.Office.Interop.Excel.Application();
xltmApp.Visible = false;
xltmApp.ScreenUpdating = false;
Workbook xltmBook = xltmApp.Workbooks.Open(@"D:\Fruits1.xltm");
Worksheet xlworksheet = xltmBook.Worksheets.get_Item(1);
for (int i = 1; i <= 1; i++)
{
    for (int j = 1; j <= 1; j++)
    {
        string str = (string)(xlworksheet.UsedRange.Cells[i, j] as Microsoft.Office.Interop.Excel.Range).Value;
        MessageBox.Show(str);
    }
}
The result is that I can read this .xltm file and read the first line without problem.
I would think you can also use this way to work with your .xltm file and output it to your application.
For more details about how to fill the data into your WPF application, these common WPF and Excel threads will be helpful:
http://www.codearsenal.net/2012/06/c-sharp-read-excel-and-show-in-wpf.html#.URPyfPJJocU
http://www.dotnetcurry.com/showarticle.aspx?ID=988
http://mozouna.com/Blog/2013/03/28/wpf-reading-and-writing-to-excel-file-with-c/
By the way, for WPF cases please post on the WPF forum instead of the C# forum.
Best regards,
Barry
-
Filtering SSRS dataset on parameter INT
Hi.
I have a simple report which has a Week parameter.
Week is a multi-select parameter populated from a dataset (values 1-52).
I have now added a filter to the main dataset:
Expression: [Week]
Operator: In
Value: Split(Fields!Week.Value, ",")
I get an error as soon as I try running the report with this filter:
An error has occurred during report processing. (rsProcessingAborted)
The processing of FilterExpression for the dataset ‘ProjectTimesheetDataSet’ cannot be performed. Cannot compare data of types System.Int32 and System.String. Please check the data type returned by the FilterExpression. (rsComparisonTypeError)
I have tried wrapping the expression and Value with CInt, no joy.
If I write out the value of Week to the page using the Join function, I get results, i.e. 48,49,50.
Where am I going wrong?
Hi CDG100,
Based on my research, I think this issue is caused by a data type mismatch between the Week field and the WeekNumber parameter. If I understand correctly, the Week field is of Integer data type in your dataset, while the WeekNumber parameter is of Text (String) data type. In your scenario, even if we change the field to Integer (from Text), it will turn back to Text, because the filter always changes the data type of the Expression to match the Value data type. That is why the error message "Cannot compare data of types System.Int32 and System.String" appears.
So in order to fix the issue, please try to use an Integer data type field as the parameter values by using "Get values from a query". Then the Week field and the WeekNumber parameter can be compared.
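The root cause is language-agnostic: an integer never equals its string representation. A small Python illustration of why the filter fails and what the type alignment has to achieve (the 48,49,50 values are taken from the question):

```python
week_field = 48  # the dataset field is an Integer

# What the broken filter effectively does: compare an int against strings
tokens = "48,49,50".split(",")        # ['48', '49', '50']
print(week_field in tokens)           # False, because 48 != '48'

# What the fix achieves: both sides share one type before comparing
weeks = [int(t) for t in "48,49,50".split(",")]
print(week_field in weeks)            # True
```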
If there is any misunderstanding, please elaborate on the issue for further investigation.
Thanks,
Katherine Xiong
Katherine Xiong
TechNet Community Support -
Datatype : 25 not supported error while saving Logical Query in BIP Dataset
Hi All,
I have created a nested logical Query in OBIEE Publisher Datamodel using parameters.
Here is the simple query :
Select name, revenue, date from
(
select "SubjectAreaName"."TableName"."Name" name,
case when :Date > date '2012-01-01' then "SubjectAreaName"."TableName"."Revenue" else NULL end Revenue,
date date
from
"SubjectAreaName"
where
date = :Date
) T1
Note - :Date is the parameter. When I try saving this dataset as-is, it throws a "Datatype : 25 not supported" error. But when I replace the parameter :Date with an actual date, the query works. Also, the inner query works with or without parameters. The issue only arises when I use the outer query together with the parameters.
Please let me know if you have come across such a problem.
Any help on this regard would be highly appreciated.
Thanks
Swarna
Ya, I can understand; we won't do the SQL logic at the data model part, as performance issues may sometimes come up when comparing SQL run in a package with SQL running in the data model. So I have mostly done it using a package, and I just do "select * from table" in the dataset of the data model, so I didn't face any issues like this.
Check this tutorial once; maybe you already have.
I create parameters like this and I didn't face any issue:
SELECT * from TEST where ID=:p_id
http://st-curriculum.oracle.com/obe/fmw/bi/bip/bip11g/gettingstarted/gettingstarted.htm -
How to polulate data from lookup using request dataset in OIM 11g
Hi,
Using a request dataset in OIM 11g, I need to display a dropdown with roles, which need to come from a lookup.
For example, I have 2 resources, i.e. Resource A and Resource B. Resource A has 5 roles and Resource B has 3 roles.
While creating a request, if I select Resource A, then I should be able to get 5 roles, and if I select Resource B, then I should be able to see the corresponding 3 roles.
Please note I have only one lookup definition, where I have roles for both Resource A and B.
I have done a similar thing in OIM 10g; however, I am unable to do it using an OIM 11g request dataset.
Please suggest.
Hi BB,
I am trying to follow up on your response.
You are suggesting to use a prepopulate adapter to populate the resource object name; that means we just have to use an SQL query on the OBJ table to get the resource object name, right? It could be like below; what value should I have for entity-type here?
<AttributeReference name="Field1" attr-ref="act_key"
available-in-bulk="false" type="Long" length="20" widget="ENTITY" required="true"
entity-type="????">
<PrePopulationAdapter name="prepopulateResurceObject"
classname="my.sample.package.prepopulateResurceObject" />
</AttributeReference>
<AttributeReference name="Field2" attr-ref="Field2" type="String" length="256" widget="lookup-query"
available-in-bulk="true" required="true">
<lookupQuery lookup-query="select lkv_encoded as Value,lkv_decoded as Description from lkv lkv,lku lku
where lkv.lku_key=lku.lku_key and lku_type_string_key='Lookup.xxx.BO.Field2'
and instr(lkv_encoded,concat('$Form data.Field1', '~'))>0" display-field="Description" save-field="Value" />
</AttributeReference>
Then I need think about the 'Lookup.xxx.BO.Field2' format.
Could you please let me know if my understanding is correct? What is the entity-type value of the first attribute reference?
Thanks for all your help.