A data structure to represent a table in RAM?
So I've implemented a shopping cart using a table in the database, and it works fine.
BUT I'm thinking the table approach isn't very efficient. For example, clicking Add to Cart does two things:
1) insert the row into the shoppingCart table
2) display the whole shopping cart for that session
I'm thinking, wouldn't it be nice if I could just store the shopping cart in a data structure? I'm sure ColdFusion supports such a structure with addRow, removeRow commands - I just can't think of it.
Do you think this is an efficient approach?
a data structure to represent a table in RAM?
Yes. In fact, you could literally use a structure to represent the shopping-cart in memory. It would begin like this:
<cfset session.shoppingCart = structnew()>
I'm sure ColdFusion supports such a structure with addRow, removeRow commands - I just can't think of it.
You would then be thinking of structInsert() and structDelete().
Do you think this is an efficient approach?
Yes. However, I have a different suggestion. What you're doing is a learning exercise. That can only be a good thing.
If you're serious about setting up shop, which I assume you are, then you should invest in shopping-cart software. It costs little relative to the benefit you could derive from it. Mary Jo's CFWebstore seems good value for money.
Similar Messages
-
Risks of changing the field length of data structure of a Cluster Table
Hello,
We are on ECC 604 and have implemented HR & Travel Management. Reporting on these applications is done in BI. We use ESS and Mobile Travel for time, travel expenses, etc., and we use PCLn clusters.
There is a business need to change the length of a field from 20 to 40 in the data structure PTK** of cluster table PCL1.
We are exploring various options to avoid core modifications.
We are also assessing the risks associated with changing the field length.
I am asking for your opinion about the risks associated with changing the field length of a data structure of a cluster table.
Thanks & Regards,
Manoj K Pingali
Recently, we came across the same situation, where we had to change the length of a field. Let me explain the precautions we took at that time.
1. Run the where-used list for that table/field and check whether it is used in any program or function module. If yes, also check the TYPE of the other variables into which the system moves the data (MOVE, WRITE, or FM parameters). If you do not consider this, you can land in big trouble (conversion dumps).
2. Ask Basis to take a backup of the production and quality data, to be on the safe side if something does not go right.
Now you can make the changes in your development system, adjust the database, and see the impact.
Hopefully you will not come across any difficulties with this change.
Thanks.
Anurag -
Data structure similar to database table
I have the need for a data structure that will essentially be the same as a database table. This "table" can be queried against and sorted.
Currently, I am using an interface to abstract this "table" data structure, and implementing the interface via JDBC calls.
Can someone point me in the right direction?
In .NET, they have the DataTable and DataSet data structures, which do exactly what I require. These objects essentially correspond to a database table and a database, respectively. They are used as an in-memory database.
I have already tried using HSQL, but was wondering if there is a "better" way (i.e. more lightweight, intuitive, and easier to use).
Thanks.
Use an array and put your records in it.
Efficient querying (simple queries) requires you to build an index for each element you wish to query on. The type of index depends on the kind of search you want to perform: exact matches can use hashes, while sorting, prefix, or postfix matching can be performed using various tree indices.
Complex queries (i.e. those that query on multiple parameters) generally require a query optimiser to figure out how to use the indices you have in the most efficient manner possible (or as close to it as the optimiser can get).
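The "array plus indices" idea above can be sketched concretely. This is a minimal, hypothetical illustration (class and field names are mine, not from the thread): records live in a list, a HashMap gives exact-match lookup, and a TreeMap supports sorted traversal and prefix searches.

```java
import java.util.*;

// A minimal in-memory "table": records in a list, plus one index per
// queryable column. HashMap gives exact-match lookup; TreeMap supports
// sorted traversal and prefix searches.
public class InMemoryTable {
    public record Row(int id, String name, int price) {}

    private final List<Row> rows = new ArrayList<>();
    private final Map<Integer, Row> byId = new HashMap<>();            // exact match
    private final TreeMap<String, List<Row>> byName = new TreeMap<>(); // sorted / prefix

    public void insert(Row r) {
        rows.add(r);
        byId.put(r.id(), r);
        byName.computeIfAbsent(r.name(), k -> new ArrayList<>()).add(r);
    }

    public Row findById(int id) { return byId.get(id); }

    // Prefix search via the sorted index: all keys in [prefix, prefix + '\uffff').
    public List<Row> findByNamePrefix(String prefix) {
        List<Row> out = new ArrayList<>();
        byName.subMap(prefix, prefix + Character.MAX_VALUE)
              .values().forEach(out::addAll);
        return out;
    }
}
```

Note the trade-off, which is the same one a database makes: every insert pays the cost of updating each index.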
cheers
matfud -
Hierarchical data structures (in a single table)
Hi,
If I have a hierarchy of objects stored in a table -
ORG_UNIT
ID
PARENT_ID
NAME
And the JDO mapping for an OrgUnit contains a parent OrgUnit and a
Collection of children.
Is there an efficient way of pulling them out of the database?
It is currently loading each individual parent's kids.
This is going to be pretty slow if there are say 500 OrgUnits in the
database.
Would it be better to pull them all out and build the hierarchy up in
code (as was being done in straight JDBC)? How can I efficiently obtain
the parent or children without doing exactly the same?
Thanks,
Simon
Simon,
There will not be a db access for every child: you read all the child records
for a particular parent at once, when you first access its child collection.
Granted, for terminal leaves you will get a db access to load an empty
collection, so effectively you get one db access per node. If your goal is
always to load and traverse the entire tree, that will be expensive. But the
beauty of hierarchical structures is that while they can be huge (millions
of nodes), you do not need to load the whole thing to navigate, just the path you
need. This is where lazy loading excels, so overall, on large trees, you will
be much better off not loading the whole thing at once. However, if you still want
to do it, nothing prevents you from having no persistent collection of child
records in the OrgUnit class at all, only a reference to the parent; you then load the
entire table using a query and build the tree in memory yourself as you
iterate over the query result set. You can probably even do it in a single
iteration over the result set. I would never do it myself, though. In my
opinion it defeats the ease of use and cleanness of your object model.
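The "load the entire table and build the tree yourself" option can be sketched as follows. This is a hypothetical illustration (names are mine); it uses two passes over the flat (ID, PARENT_ID, NAME) rows so the row order from the query does not matter.

```java
import java.util.*;

// One-pass-over-the-table tree build from flat (id, parentId, name) rows:
// load the whole ORG_UNIT table once, then wire parents to children in memory.
public class OrgTree {
    public static class OrgUnit {
        public final int id;
        public final String name;
        public OrgUnit parent;
        public final List<OrgUnit> children = new ArrayList<>();
        OrgUnit(int id, String name) { this.id = id; this.name = name; }
    }

    // rows: {id, parentId, name}; parentId is null for the root.
    public static Map<Integer, OrgUnit> build(List<Object[]> rows) {
        Map<Integer, OrgUnit> byId = new HashMap<>();
        // First pass: create every node.
        for (Object[] r : rows)
            byId.put((Integer) r[0], new OrgUnit((Integer) r[0], (String) r[2]));
        // Second pass: link each child to its parent.
        for (Object[] r : rows) {
            Integer parentId = (Integer) r[1];
            if (parentId != null) {
                OrgUnit child = byId.get((Integer) r[0]);
                OrgUnit parent = byId.get(parentId);
                child.parent = parent;
                parent.children.add(child);
            }
        }
        return byId;
    }
}
```

For 500 OrgUnits this costs one query instead of one per node, at the price Alex mentions: the tree structure now lives outside the persistent object model.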
Alex
-
I need various data structure tables for MRS
A possible solution is to create a materialized view and then a check constraint on the materialized view; this should work in a multiuser environment.
Example:
SQL> drop table t;
Table dropped.
SQL>
SQL> create table t (
2 x integer,
3 y varchar2(10)
4 );
Table created.
SQL>
SQL> CREATE MATERIALIZED VIEW LOG on t
2 WITH ROWID (x, y)
3 including new values;
Materialized view log created.
SQL>
SQL> CREATE MATERIALIZED VIEW t_mv
2 REFRESH FAST ON COMMIT AS
3 SELECT count(*) cnt
4 FROM t;
Materialized view created.
SQL>
SQL> ALTER TABLE t_mv
2 ADD CONSTRAINT chk check(cnt<=1);
Table altered.
SQL>
SQL> insert into t values(1,'Ok');
1 row created.
SQL> commit;
Commit complete.
SQL>
SQL> insert into t values(2,'KO');
1 row created.
SQL> commit;
commit
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (TEST.CHK) violated -
I'm trying to develop an application where I need to store 2 or more equivalent objects. Each object has an id (int) used as part of the hashCode() and equals() methods. I would like to store all objects in a data structure that allows the objects to be stored and easily retrieved by either id. I have thought about using TreeMap or Hashtable, but those only allow retrieval of an element by its key.
In description logic, we can express statements such as A := C, meaning that A and C are equivalent (i.e. the same object). I would like to keep track of these objects to reduce the number of iterations carried out by the algorithm. If I store these elements in a java.util.Hashtable<K,V>, then I can only retrieve objects using the key (i.e. Hashtable.get(Object key)). However, I need to be able to retrieve elements either by key or by value, depending on the task. For example, if I find C in another statement, I would like to replace it with A, and sometimes I need to use A to retrieve C.
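One way to get lookups in both directions, sketched under the assumption that the mapping is one-to-one, is simply to keep two hash maps in sync. This is essentially what ready-made bidirectional maps (for example Guava's BiMap) do; names here are illustrative.

```java
import java.util.*;

// A minimal bidirectional map: two HashMaps kept in sync, so an
// equivalence A := C can be looked up from either side.
public class BiMap<K, V> {
    private final Map<K, V> forward = new HashMap<>();
    private final Map<V, K> backward = new HashMap<>();

    public void put(K key, V value) {
        forward.put(key, value);
        backward.put(value, key);
    }

    public V getByKey(K key)     { return forward.get(key); }
    public K getByValue(V value) { return backward.get(value); }
}
```

The memory cost is two entries per pair, but both lookup directions stay O(1) on average.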
-
Simple Transformation to deserialize an XML file into ABAP data structures?
I'm attempting to write my first simple transformation to deserialize
an XML file into ABAP data structures, and I have a few questions.
My simple transformation contains code like the following:
<tt:transform xmlns:tt="http://www.sap.com/transformation-templates"
xmlns:pp="http://www.sap.com/abapxml/types/defined" >
<tt:type name="REPORT" line-type="?">
<tt:node name="COMPANY_ID" type="C" length="10" />
<tt:node name="JOB_ID" type="C" length="20" />
<tt:node name="TYPE_CSV" type="C" length="1" />
<tt:node name="TYPE_XLS" type="C" length="1" />
<tt:node name="TYPE_PDF" type="C" length="1" />
<tt:node name="IS_NEW" type="C" length="1" />
</tt:type>
<tt:root name="ROOT2" type="pp:REPORT" />
<QueryResponse>
<tt:loop ref="ROOT2" name="line">
<QueryResponseRow>
<CompanyID>
<tt:value ref="$line.COMPANY_ID" />
</CompanyID>
<JobID>
<tt:value ref="$line.JOB_ID" />
</JobID>
<ExportTypes>
<tt:loop>
<ExportType>
I don't know what to do here (see item 3, below)
</ExportType>
</tt:loop>
</ExportTypes>
<IsNew>
<tt:value ref="$line.IS_NEW"
map="val(' ') = xml('false'), val('X') = xml('true')" />
</IsNew>
</QueryResponseRow>
</tt:loop>
</QueryResponse>
</tt:transform>
1. In a DTD, an element can be designated as occurring zero or one
time, zero or more times, or one or more times. How do I write the
simple transformation to accommodate these possibilities?
2. In trying to accommodate the "zero or more times" case, I am trying
to use the <tt:loop> instruction. It occurs several layers deep in the
XML hierarchy, but at the top level of the ABAP table. The internal
table has a structure defined in the ABAP program, not in the data
dictionary. In the simple transformation, I used <tt:type> and
<tt:node> to define the structure of the internal table and then
tried to use <tt:loop ref="ROOT2" name="line"> around the subtree that
can occur zero or more times. But every variation I try seems to get
different errors. Can anyone supply a working example of this?
3. Among the fields in the internal table, I've defined three
one-character fields named TYPE_CSV, TYPE_XLS, and TYPE_PDF. In the
XML file, I expect zero to three elements of the form
<ExportType exporttype='csv' />
<ExportType exporttype='xls' />
<ExportType exporttype='pdf' />
I want to set field TYPE_CSV = 'X' if I find an ExportType element
with its exporttype attribute set to 'csv'. I want to set field
TYPE_XLS = 'X' if I find an ExportType element with its exporttype
attribute set to 'xls'. I want to set field TYPE_PDF = 'X' if I find
an ExportType element with its exporttype attribute set to 'pdf'. How
can I do that?
4. For an element that has a value like
<ErrorCode>123</ErrorCode>
in the simple transformation, the sequence
<ErrorCode> <tt:value ref="ROOT1.CODE" /> </ErrorCode>
seems to work just fine.
I have other situations where the XML reads
<IsNew value='true' />
I wanted to write
<IsNew>
<tt:value ref="$line.IS_NEW"
map="val(' ') = xml('false'), val('X') = xml('true')" />
</IsNew>
but I'm afraid that the <tt:value> fails to deal with the fact that in
the XML file the value is being passed as the value of an attribute
(named "value"), rather than the value of the element itself. How do
you handle this?
Try this code below:
data l_xml_table2 type table of xml_line with header line.
W_filename - This is a Path.
if w_filename(02) = '
open dataset w_filename for output in binary mode.
if sy-subrc = 0.
l_xml_table2[] = l_xml_table[].
loop at l_xml_table2.
transfer l_xml_table2 to w_filename.
endloop.
endif.
close dataset w_filename.
else.
call method cl_gui_frontend_services=>gui_download
exporting
bin_filesize = l_xml_size
filename = w_filename
filetype = 'BIN'
changing
data_tab = l_xml_table
exceptions
others = 24.
if sy-subrc <> 0.
message id sy-msgid type sy-msgty number sy-msgno
with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
endif. -
Hi,
I need to export data from Excel to an ADF table, edit the data, and finally send it back to the database.
Would you kindly let me know how to proceed with this?
Many thanks in advance
Regards
This depends on your Excel file structure.
You can use ADF Desktop Integration, or
something like this (for simple Excel files):
For Excel -> ADF table:
http://technology.amis.nl/2010/09/16/adf-11g-import-from-excel-into-an-adf-table/
For ADF table -> Excel:
http://www.baigzeeshan.com/2010/04/exporting-table-to-excel-in-oracle-adf.html
http://adfreusablecode.blogspot.com/2012/07/export-to-excel-with-styles.html
Dario -
How to get multiple data set values from a Decision Table
Hi All,
I need to use SAP BRM for a scenario where, based on some Code Group, I need to get a set of questions and, for each question, a set of possible answers.
The structure of the Decision Tables will be like below:
Table 1: To get the set of questions based on Project Code
Input:  Project Code
Output: Question Id
Output: Question Description
Table 2: To get the set of answers based on a question
Input:  Question ID
Output: Answer Id
Output: Answer Description
I have already searched the forum for getting multiple values based on some input, and that works fine for a single field with multiple outcomes:
Handling Selective Multiple Actions in SAP Business Rules Management System
In my scenario I need to get a set of Ids and descriptions as the multiple outcome.
Can anyone please let me know how this can be achieved in BRM?
Thanks in advance
Ravindra
Create an XSD in the BRM project with the desired data structure and import the XSD alias in the 'Project Resources'. Add this XSD alias as input/output of the decision table.
Refer this:
Creating a Simple BRM Ruleset in 30 Easy Steps using NWDS 7.3 (Flow Ruleset with a Decision Table) -
Storage location data is not being saved in table MARD using a BAPI method
Hi Experts,
TABLES: T001L, "Storage Locations
MARA, "General Material Data
MAKT, "Material Descriptions
MBEW, "Material Valuation
MARC, "Plant Data for Material
MARD. "Storage Location Data for Mate
DATA: BAPI_HEAD LIKE BAPIMATHEAD, "MATERIAL
BAPI_MAKT LIKE BAPI_MAKT, "Material Description
BAPI_MARA1 LIKE BAPI_MARA, "Client Data
BAPI_MARAX LIKE BAPI_MARAX,
BAPI_MARC1 LIKE BAPI_MARC, "Plant View
BAPI_MARCX LIKE BAPI_MARCX,
BAPI_MBEW1 LIKE BAPI_MBEW, "Accounting View
BAPI_MBEWX LIKE BAPI_MBEWX,
BAPI_MARD1 LIKE BAPI_MARD, "Storage location
BAPI_MARDX LIKE BAPI_MARDX,
BAPI_RETURN LIKE BAPIRET2.
DATA: BEGIN OF INT_MAKT OCCURS 100.
INCLUDE STRUCTURE BAPI_MAKT.
DATA: END OF INT_MAKT.
DATA: BEGIN OF INT_MAT OCCURS 100,
WERKS(4), "Plant
LGORT(4), "Storage location
MTART(4), "Material type
MATNR(18), "Material number
MAKTX(40), "Material description
MATKL(9) , "Material group
MBRSH(1), "Industry sector
MEINS(3), "Base unit of measure
GEWEI(3), "Weight Unit
SPART(2), "Division
EKGRP(3), "Purchasing group
VPRSV(1), "Price control indicator
STPRS(12), "Standard price
PEINH(3), "Price unit
SPRAS(2), "Language key
BKLAS(4), "VALUATION CLASS
VERPR TYPE VERPR_BAPI, "MOVING PRICE
BWTTY(1), "Valuation Category
MLAST(1), "Price determination
MLMAA(1), "Material Ledger
EKLAS(4), "Valuation Class for sales order stock
QKLAS(4), "Valuation Class for Project Stock
ZKPRS TYPE DZKPRS, "Future price
ZKDAT TYPE DZKDAT, "Valid From Date
BWPRS TYPE BWPRS, "Tax price 1
BWPS1 TYPE BWPS1, "Tax price 2
VJBWS TYPE VJBWS, "Tax price 3
ABWKZ TYPE ABWKZ, "Devaluation indicator
BWPRH TYPE BWPRH, "Commercial price 1
BWPH1 TYPE BWPH1, "Commercial price 2
VJBWH TYPE VJBWH, "Commercial Price 3
XLIFO(1), "LIFO/FIFO relevant
MYPOL(4), "Pool no for LIFO
MMSTA(2), "Plant specific material status
AUSME TYPE AUSME, "Unit of issue
QMATA(6), "Material Authorization group
RBNRM(9), "Catalog Profile
WEBAZ TYPE WEBAZ, "Goods receipt processing time in days
PRFRQ TYPE PRFRQ, "Recurring Inspection
SSQSS(8), "QM Control key
QZGTP(4), "Certificate Type
QSSYS(4), "Required QM system for vendor
END OF INT_MAT.
DATA: V_MATNR TYPE MARA-MATNR.
SELECT-OPTIONS:
PLANT FOR MARC-WERKS OBLIGATORY MEMORY ID PLT,
S_LGORT FOR MARD-LGORT MEMORY ID STL,
MATERIAL FOR MARA-MATNR MEMORY ID MAT,
MATLTYPE FOR MARA-MTART MEMORY ID MTY,
DIVISION FOR MARA-SPART MEMORY ID DIV.
PARAMETERS: F_FILE LIKE RLGRAP-FILENAME
DEFAULT 'C:\DATA\ZMATERIAL.XLS' MEMORY ID F_FILE,
GETDATA AS CHECKBOX, "Tick to download materials data to local harddisk
UPDDATA AS CHECKBOX. "Tick to update date to Materials Master
IF GETDATA = 'X'.
PERFORM DOWNLOAD_DATA.
PERFORM DOWNLOAD_FILE.
ENDIF.
IF UPDDATA = 'X'.
PERFORM UPLOAD_FILE.
PERFORM UPDATE_MM.
ENDIF.
FORM DOWNLOAD_DATA.
SELECT * FROM MARC WHERE LVORM EQ ' '
AND WERKS IN PLANT
AND MATNR IN MATERIAL.
CLEAR MARA.
SELECT SINGLE * FROM MARA WHERE MATNR = MARC-MATNR.
CHECK MATLTYPE.
CHECK DIVISION.
CLEAR MBEW.
SELECT SINGLE * FROM MBEW WHERE MATNR = MARC-MATNR
AND BWKEY = MARC-WERKS.
CLEAR MAKT.
SELECT SINGLE * FROM MAKT WHERE SPRAS = 'EN'
AND MATNR = MARC-MATNR.
CLEAR MARD.
SELECT SINGLE * FROM MARD WHERE WERKS IN PLANT
AND LGORT IN S_LGORT.
WRITE:/ MARC-WERKS, "Plant
MARD-LGORT, "Storage location
MARA-MTART, "Material type
MARA-MATNR, "Material number
MARA-MATKL, "Material group
MARA-MBRSH, "Industry sector
MARA-MEINS, "Base unit of measure
MARA-GEWEI, "Weight Unit
MARA-SPART, "Division
MARC-EKGRP, "Purchasing group
MBEW-VPRSV, "Price control indicator
MBEW-STPRS, "Standard price
MBEW-PEINH, "Price unit
MBEW-BKLAS, "VALUE CLASS
MAKT-SPRAS, "Language key
MBEW-BKLAS, "Valuation Class
MBEW-VERPR, "Moving price
MAKT-MAKTX, "Material description
MBEW-BWTTY, "Valuation Category
MBEW-MLAST, "Price Determination
MBEW-MLMAA, "Material Ledger
MBEW-EKLAS, "Valuation class for Sales order stock
MBEW-QKLAS, "Valuation Class for Project Stock
MBEW-ZKPRS, "Future Price
MBEW-ZKDAT, "Valid From Date
MBEW-BWPRS, "Tax price 1
MBEW-BWPS1, "Tax price 2
MBEW-VJBWS, "Tax price 3
MBEW-ABWKZ, "Devaluation indicator
MBEW-BWPRH, "Commercial price 1
MBEW-BWPH1, "Commercial price 2
MBEW-VJBWH, "Commercial Price 3
MBEW-XLIFO, "LIFO/FIFO relevant
MBEW-MYPOL, "Pool no for LIFO
MARC-MMSTA, "Plant specific material status
MARC-AUSME, "Unit of issue
MARC-QMATA, "Material Authorization group
MARA-RBNRM, "Catalog Profile
MARC-WEBAZ, "Goods receipt processing time in days
MARC-PRFRQ, "Recurring Inspection
MARC-SSQSS, "QM Control key
MARC-QZGTP, "Certificate Type
MARC-QSSYS. "Required QM system for vendor
INT_MAT-WERKS = MARC-WERKS. "Plant
INT_MAT-LGORT = MARD-LGORT. "Storage Location
INT_MAT-MTART = MARA-MTART. "Material type
INT_MAT-MATNR = MARA-MATNR. "Material number
INT_MAT-MAKTX = MAKT-MAKTX. "Material description
INT_MAT-MATKL = MARA-MATKL. "Material group
INT_MAT-MBRSH = MARA-MBRSH. "Industry sector
INT_MAT-MEINS = MARA-MEINS. "Base unit of measure
INT_MAT-GEWEI = MARA-GEWEI. "Weight Unit
INT_MAT-SPART = MARA-SPART. "Division
INT_MAT-EKGRP = MARC-EKGRP. "Purchasing group
INT_MAT-VPRSV = MBEW-VPRSV. "Price control indicator
INT_MAT-STPRS = MBEW-STPRS. "Standard price
INT_MAT-PEINH = MBEW-PEINH. "Price unit
INT_MAT-SPRAS = MAKT-SPRAS. "Language key
INT_MAT-BKLAS = MBEW-BKLAS. "Valuation class
INT_MAT-VERPR = MBEW-VERPR. "Moving price
INT_MAT-BWTTY = MBEW-BWTTY. "Valuation Category
INT_MAT-MLAST = MBEW-MLAST. "Price Determination
INT_MAT-MLMAA = MBEW-MLMAA. "Material Ledger
INT_MAT-EKLAS = MBEW-EKLAS. "Valuation class forS.O Stock
INT_MAT-QKLAS = MBEW-QKLAS. "Valuation Class for Project
INT_MAT-ZKPRS = MBEW-ZKPRS. "Future Price
INT_MAT-ZKDAT = MBEW-ZKDAT. "Valid From Date
INT_MAT-BWPRS = MBEW-BWPRS. "Tax price 1
INT_MAT-BWPS1 = MBEW-BWPS1. "Tax price 2
INT_MAT-VJBWS = MBEW-VJBWS. "Tax price 3
INT_MAT-ABWKZ = MBEW-ABWKZ. "Devaluation indicator
INT_MAT-BWPRH = MBEW-BWPRH. "Commercial price 1
INT_MAT-BWPH1 = MBEW-BWPH1. "Commercial price 2
INT_MAT-VJBWH = MBEW-VJBWH. "Commercial Price 3
INT_MAT-XLIFO = MBEW-XLIFO. "LIFO/FIFO relevant
INT_MAT-MYPOL = MBEW-MYPOL. "Pool no for LIFO
INT_MAT-MMSTA = MARC-MMSTA. "Plant specific material
INT_MAT-AUSME = MARC-AUSME. "Unit of issue
INT_MAT-QMATA = MARC-QMATA. "Material Authorization group
INT_MAT-RBNRM = MARA-RBNRM. "Catalog Profile
INT_MAT-WEBAZ = MARC-WEBAZ. "Goods receipt processing
INT_MAT-PRFRQ = MARC-PRFRQ. "Recurring Inspection
INT_MAT-SSQSS = MARC-SSQSS. "QM Control key
INT_MAT-QZGTP = MARC-QZGTP. "Certificate Type
INT_MAT-QSSYS = MARC-QSSYS. "Required QM system for
APPEND INT_MAT.
CLEAR INT_MAT.
ENDSELECT.
ENDFORM.
FORM DOWNLOAD_FILE.
call function 'WS_DOWNLOAD'
EXPORTING
FILENAME = F_FILE
FILETYPE = 'DAT'
* FILETYPE = 'WK1'
tables
data_tab = INT_MAT
EXCEPTIONS
FILE_OPEN_ERROR = 1
FILE_WRITE_ERROR = 2
INVALID_FILESIZE = 3
INVALID_TYPE = 4
NO_BATCH = 5
UNKNOWN_ERROR = 6
INVALID_TABLE_WIDTH = 7
GUI_REFUSE_FILETRANSFER = 8
CUSTOMER_ERROR = 9
OTHERS = 10.
IF SY-SUBRC = 0.
FORMAT COLOR COL_GROUP.
WRITE:/ 'Data Download Successfully to your local harddisk'.
SKIP.
ENDIF.
ENDFORM.
FORM UPLOAD_FILE.
call function 'WS_UPLOAD'
EXPORTING
FILENAME = F_FILE
FILETYPE = 'DAT'
* FILETYPE = 'WK1'
tables
data_tab = INT_MAT
EXCEPTIONS
FILE_OPEN_ERROR = 1
FILE_WRITE_ERROR = 2
INVALID_FILESIZE = 3
INVALID_TYPE = 4
NO_BATCH = 5
UNKNOWN_ERROR = 6
INVALID_TABLE_WIDTH = 7
GUI_REFUSE_FILETRANSFER = 8
CUSTOMER_ERROR = 9
OTHERS = 10.
IF SY-SUBRC = 0.
FORMAT COLOR COL_GROUP.
WRITE:/ 'Data Upload Successfully from your local harddisk'.
SKIP.
ENDIF.
ENDFORM.
FORM UPDATE_MM.
LOOP AT INT_MAT.
CALL FUNCTION 'CONVERSION_EXIT_MATN1_INPUT'
EXPORTING
INPUT = INT_MAT-MATNR
IMPORTING
OUTPUT = INT_MAT-MATNR
EXCEPTIONS
LENGTH_ERROR = 1
OTHERS = 2.
IF SY-SUBRC <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
* Header
BAPI_HEAD-MATERIAL = INT_MAT-MATNR.
BAPI_HEAD-IND_SECTOR = INT_MAT-MBRSH.
BAPI_HEAD-MATL_TYPE = INT_MAT-MTART.
BAPI_HEAD-BASIC_VIEW = 'X'.
BAPI_HEAD-PURCHASE_VIEW = 'X'.
BAPI_HEAD-ACCOUNT_VIEW = 'X'.
* Material Description
REFRESH INT_MAKT.
INT_MAKT-LANGU = INT_MAT-SPRAS.
INT_MAKT-MATL_DESC = INT_MAT-MAKTX.
APPEND INT_MAKT.
* Client Data - Basic
BAPI_MARA1-MATL_GROUP = INT_MAT-MATKL.
BAPI_MARA1-BASE_UOM = INT_MAT-MEINS.
BAPI_MARA1-UNIT_OF_WT = INT_MAT-GEWEI.
BAPI_MARA1-DIVISION = INT_MAT-SPART.
BAPI_MARAX-MATL_GROUP = 'X'.
BAPI_MARAX-BASE_UOM = 'X'.
BAPI_MARAX-UNIT_OF_WT = 'X'.
BAPI_MARAX-DIVISION = 'X'.
* Plant - Purchasing
BAPI_MARC1-PLANT = INT_MAT-WERKS.
BAPI_MARC1-PUR_GROUP = INT_MAT-EKGRP.
BAPI_MARC1-PUR_STATUS = INT_MAT-MMSTA.
BAPI_MARC1-ISSUE_UNIT = INT_MAT-AUSME.
BAPI_MARC1-QM_AUTHGRP = INT_MAT-QMATA.
BAPI_MARC1-GR_PR_TIME = INT_MAT-WEBAZ.
BAPI_MARC1-INSP_INT = INT_MAT-PRFRQ.
BAPI_MARC1-CTRL_KEY = INT_MAT-SSQSS.
BAPI_MARC1-CERT_TYPE = INT_MAT-QZGTP.
BAPI_MARC1-QM_RGMTS = INT_MAT-QSSYS.
BAPI_MARCX-PLANT = INT_MAT-WERKS.
BAPI_MARCX-PUR_GROUP = 'X'.
BAPI_MARCX-PUR_STATUS = 'X'.
BAPI_MARCX-ISSUE_UNIT = 'X'.
BAPI_MARCX-QM_AUTHGRP = 'X'.
BAPI_MARCX-GR_PR_TIME = 'X'.
BAPI_MARCX-INSP_INT = 'X'.
BAPI_MARCX-CTRL_KEY = 'X'.
BAPI_MARCX-CERT_TYPE = 'X'.
BAPI_MARCX-QM_RGMTS = 'X'.
* Accounting 1
BAPI_MBEW1-VAL_AREA = INT_MAT-WERKS.
BAPI_MBEW1-PRICE_CTRL = INT_MAT-VPRSV.
BAPI_MBEW1-STD_PRICE = INT_MAT-STPRS.
BAPI_MBEW1-PRICE_UNIT = INT_MAT-PEINH.
BAPI_MBEW1-MOVING_PR = INT_MAT-VERPR.
BAPI_MBEW1-VAL_CLASS = INT_MAT-BKLAS.
BAPI_MBEW1-VAL_CAT = INT_MAT-BWTTY.
BAPI_MBEW1-ML_SETTLE = INT_MAT-MLAST.
BAPI_MBEW1-ML_ACTIVE = INT_MAT-MLMAA.
BAPI_MBEW1-VM_SO_STK = INT_MAT-EKLAS.
BAPI_MBEW1-VM_P_STOCK = INT_MAT-QKLAS.
BAPI_MBEW1-FUTURE_PR = INT_MAT-ZKPRS.
BAPI_MBEW1-VALID_FROM = INT_MAT-ZKDAT.
*ACCOUNTING 2
BAPI_MBEW1-TAXPRICE_1 = INT_MAT-BWPRS.
BAPI_MBEW1-TAXPRICE_2 = INT_MAT-BWPS1.
BAPI_MBEW1-TAXPRICE_3 = INT_MAT-VJBWS.
BAPI_MBEW1-DEVAL_IND = INT_MAT-ABWKZ.
BAPI_MBEW1-COMMPRICE1 = INT_MAT-BWPRH.
BAPI_MBEW1-COMMPRICE2 = INT_MAT-BWPH1.
BAPI_MBEW1-COMMPRICE3 = INT_MAT-VJBWH.
BAPI_MBEW1-LIFO_FIFO = INT_MAT-XLIFO.
BAPI_MBEW1-POOLNUMBER = INT_MAT-MYPOL.
BAPI_MBEWX-VAL_AREA = INT_MAT-WERKS.
BAPI_MBEWX-PRICE_CTRL = 'X'.
BAPI_MBEWX-STD_PRICE = 'X'.
BAPI_MBEWX-PRICE_UNIT = 'X'.
BAPI_MBEWX-MOVING_PR = 'X'.
BAPI_MBEWX-VAL_CLASS = 'X'.
BAPI_MBEWX-VAL_CAT = 'x'.
BAPI_MBEWX-ML_SETTLE = 'X'.
BAPI_MBEWX-ML_ACTIVE = 'X'.
BAPI_MBEWX-VM_SO_STK = 'X'.
BAPI_MBEWX-VM_P_STOCK = 'X'.
BAPI_MBEWX-FUTURE_PR = 'X'.
BAPI_MBEWX-VALID_FROM = 'X'.
BAPI_MBEWX-TAXPRICE_1 = 'X'.
BAPI_MBEWX-TAXPRICE_2 = 'X'.
BAPI_MBEWX-TAXPRICE_3 = 'X'.
BAPI_MBEWX-DEVAL_IND = 'X'.
BAPI_MBEWX-COMMPRICE1 = 'X'.
BAPI_MBEWX-COMMPRICE2 = 'X'.
BAPI_MBEWX-COMMPRICE3 = 'X'.
BAPI_MBEWX-LIFO_FIFO = 'X'.
BAPI_MBEWX-POOLNUMBER = 'X'.
*Storage Locations
BAPI_MARD1-PLANT = INT_MAT-WERKS.
BAPI_MARD1-STGE_LOC = INT_MAT-LGORT.
BAPI_MARDX-PLANT = INT_MAT-WERKS.
BAPI_MARDX-STGE_LOC = INT_MAT-LGORT.
WRITE:/ BAPI_HEAD-MATERIAL, BAPI_MARC1-PLANT ,BAPI_MARD1-STGE_LOC.
call function 'BAPI_MATERIAL_SAVEDATA'
exporting
HEADDATA = BAPI_HEAD
CLIENTDATA = BAPI_MARA1
CLIENTDATAX = BAPI_MARAX
PLANTDATA = BAPI_MARC1
PLANTDATAX = BAPI_MARCX
* FORECASTPARAMETERS =
* FORECASTPARAMETERSX =
* PLANNINGDATA =
* PLANNINGDATAX =
STORAGELOCATIONDATA = BAPI_MARD1
STORAGELOCATIONDATAX = BAPI_MARDX
VALUATIONDATA = BAPI_MBEW1
VALUATIONDATAX = BAPI_MBEWX
* WAREHOUSENUMBERDATA =
* WAREHOUSENUMBERDATAX =
* SALESDATA = BAPI_MVKE1
* SALESDATAX = BAPI_MVKEX
* STORAGETYPEDATA =
* STORAGETYPEDATAX =
IMPORTING
RETURN = BAPI_RETURN
TABLES
MATERIALDESCRIPTION = INT_MAKT.
* UNITSOFMEASURE =
* UNITSOFMEASUREX =
* INTERNATIONALARTNOS =
* MATERIALLONGTEXT =
* TAXCLASSIFICATIONS =
* RETURNMESSAGES =
* PRTDATA =
* PRTDATAX =
* EXTENSIONIN =
* EXTENSIONINX =
IF BAPI_RETURN-TYPE = 'E'.
WRITE:/ 'Error Message ', BAPI_RETURN-MESSAGE.
ENDIF.
ENDLOOP.
ENDFORM.
I am using this BAPI method to copy materials from one plant to another plant using the storage location. Everything goes correctly, but only the storage location data is not being saved in table MARD, so if anybody has faced this kind of problem, please tell me. And one more doubt:
bapi_marcx-pur_status = 'X' - what is 'X' here? Is that a mandatory field or a required field?
Points will be rewarded.
Regards,
Sunil K Airam.
In the HEADDATA structure, STORAGE_VIEW should also be set to 'X', in order to update storage location data.
For example:
BAPI_HEAD-STORAGE_VIEW = 'X'.
Also, PUR_STATUS corresponds to field MARA-MSTAE, whose domain has value table T141; therefore values in the field are checked against T141.
Edited by: Harris Veziris on May 12, 2008 12:37 PM -
How to transfer data from a dynamic internal table
Hi All
I want to transfer data from a dynamic internal table <dyn_table>
to a non-dynamic internal table itab, which should have the same structure as <dyn_table>.
How can this be done?
Regards,
Harshit Rungta
As stated earlier, this can be done only through field symbols.
You cannot create a non-dynamic internal table with an arbitrary structure using the DATA statement.
If the structure is known, well and good: you can create a non-dynamic internal table.
If you do not know the structure, then the internal table has to be dynamic, and it has to be generated using field symbols:
DATA: lv_ref TYPE REF TO data.
FIELD-SYMBOLS: <fs_dyn_table> TYPE STANDARD TABLE.
* You create a dynamic internal table...
CREATE DATA lv_ref LIKE (your_dynamic_internal_table).
ASSIGN lv_ref->* TO <fs_dyn_table>.
Now...do the transfer.
<fs_dyn_table> = "your_dynamic_internal_Table
Hope it helps! -
How to select data from a PL/SQL table
Hi,
I am selecting data from the database and, after doing some screening, I want to store it in a PL/SQL table (a temporary area) and pass it to Oracle Reports.
Is there any way to select the data from a PL/SQL table as a cursor? Or is there any other way of holding the temporary data and then passing it back as a cursor?
Regards
Kamal
A PL/SQL "table" is anything but a table. Whoever came up with this term in PL/SQL to describe what is known as a dynamic array (the correct programming terminology, which has existed since the 70's if not earlier, and which is used in every other programming language I'm familiar with)... well, several descriptions come to mind, and none of them are complimentary.
You cannot "select" from a PL/SQL dynamic array as it is not a table within the Oracle context of tables.
Thus you need to convert (cast) a PL/SQL dynamic array into a temporary Oracle data set/table in order to select from it. This is in general a Bad Idea (tm). Oracle tables, SQL, concurrency controls, and all the rest are specifically designed for processing data. A PL/SQL array is a very simplistic data structure with very limited usage. Why would you want to use that in SQL via a SELECT statement when you can use Oracle tables (or proper temp tables) instead? Besides that, it is also slow to cast a dynamic PL/SQL array into an Oracle SQL data set structure (context switching, copying of memory, etc.).
The proper way to use PL/SQL to generate data sets for use via the SQL engine is pipelined table functions.
This is not to say that you should never use PL/SQL arrays and casting in SQL, simply that you need to make sure this is the correct and scalable way to do it, and that it will always be an exception to the rule when you do.
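For reference, the pipelined table function approach looks roughly like this. The table, type, and function names are invented for illustration; `source_table` stands in for whatever the screening query reads.

```sql
-- Sketch of a pipelined table function: the PL/SQL engine streams rows
-- to the SQL engine instead of materializing an array and casting it.
CREATE TYPE t_row AS OBJECT (id NUMBER, name VARCHAR2(30));
/
CREATE TYPE t_tab AS TABLE OF t_row;
/
CREATE OR REPLACE FUNCTION screened_rows RETURN t_tab PIPELINED IS
BEGIN
  -- The "screening" logic goes in this loop.
  FOR rec IN (SELECT id, name FROM source_table WHERE name IS NOT NULL) LOOP
    PIPE ROW (t_row(rec.id, rec.name));  -- stream each row as it is produced
  END LOOP;
  RETURN;
END;
/
-- The report can then select from it like any table:
SELECT * FROM TABLE(screened_rows);
```

Oracle Reports can be pointed at that final SELECT, so the "temporary area" never has to be materialized as a whole.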
Error saving data structure CE11000 (please read log) message number KX 655
While activating the data structures in the operating concern of CO-PA, SAP gives the following errors:
1.Error saving data structure CE11000 (please read log)
Message no. KX655
2.Error saving table CE01000
Message no. KX593
3. In the log: Reference field CE31000-REC_WAERS for CE31000-VVQ10001 has incorrect type.
Please suggest.
Hey,
The tables below are related to application logs:
BAL_AMODAL : Application Log: INDX table for amodal communication
BALC : Application Log: Log or message context
BALDAT : Application Log: Log data
BALHANDLE : Application Log: Lock object dummy table
BALHDR : Application log: log header
BALHDRP : Application log: log parameter
BAL_INDX : Application Log: INDX tables
BALM : Application log: log message
BALMP : Application log: message parameter
BALOBJ : Application log: objects
BALOBJT : Application log: object texts
BALSUB : Application log: sub-objects
BALSUBT : Application log: Sub-object texts
-Kiran
*Please mark useful answers -
Issues while migrating data from a disk based table to a memory optimized table
Hi All,
I have a disk-based table with 400,000,000 rows in it, and we are trying to convert it into a memory-optimized table.
We have already created a memory-optimized table with a similar structure and are trying to import data into this memory-optimized table using 'insert into' from the disk table.
I am trying to migrate around 10,000,000 rows at a time, but I am getting the error 'There is insufficient system memory in resource pool 'default' to run this query', although we have 128 GB of RAM on the server and SQL Server is utilizing more than 120 GB of it.
As a result, the query gets cancelled.
I wanted to know how we could migrate the table with the available RAM, or whether we have to increase our RAM.
aa
Josh,
Microsoft's documentation on this subject isn't at its best right now (I believe there will be incremental improvements for better understanding), but here is what I have read so far.
http://msdn.microsoft.com/en-us/library/dn133190.aspx
"A hash index consists of a collection of buckets organized in an array. A hash function maps
index keys to corresponding buckets in the hash index."
Judging by this statement, a hash index is a hash table just like the ones used as work tables for hash operators in queries (hash matching or grouping). It doesn't contain (or include) other columns, i.e. it doesn't store any data.
"Multiple index keys may be mapped to the same hash bucket."
This means there is some kind of mapping, but this is not explained in the article above. However...
http://msdn.microsoft.com/en-us/library/dn282389.aspx
"For each hash index in the table, each row has an 8-byte address pointer to the next row in the index. Since there are 4 indexes, each row will allocate 32 bytes for index pointers (an 8 byte pointer for each index)."
Each row in the table has a pointer (one for every index, a 1:1 ratio) that points to a row (also known as a bucket) in the hash index. So that is how the aforementioned mapping works!
> What happens if you include a column in two or three different indexes, or is that not allowed?
My conclusion is that hash indexes work the same way as a hash worktable, with the addition of a column in the base table that stores pointers into the hash index.
When you create a new index, even if you use the same column twice, a hash table is created, hash calculations are made distinctly for each key and stored in it, and while this is done, the column that is used exclusively for this new index is populated with pointers to this index. You can add a given column to the key set of different hash indexes as many times as you want. Correct me if I'm wrong; I'm also new to this subject :D
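To make those pointer mechanics concrete, here is a toy model in Python (my own sketch, not SQL Server internals): each row carries one chain-pointer slot per index, each bucket holds the head of a chain threaded through the rows, and giving a row more than one slot is exactly what lets the same column participate in several indexes.

```python
class Row:
    """A table row with one chain-pointer slot per hash index
    (the analogue of the 8-byte per-index pointers in the quoted docs)."""
    def __init__(self, key, payload, index_count):
        self.key = key
        self.payload = payload
        self.next_in_index = [None] * index_count

class HashIndex:
    """An array of buckets; each bucket heads a chain threaded through the rows."""
    def __init__(self, index_id, bucket_count):
        self.index_id = index_id          # which pointer slot this index uses
        self.buckets = [None] * bucket_count

    def insert(self, row):
        b = hash(row.key) % len(self.buckets)
        row.next_in_index[self.index_id] = self.buckets[b]  # link to old chain head
        self.buckets[b] = row

    def lookup(self, key):
        node = self.buckets[hash(key) % len(self.buckets)]
        while node is not None:
            if node.key == key:           # different keys can share a bucket
                yield node
            node = node.next_in_index[self.index_id]

idx = HashIndex(index_id=0, bucket_count=8)
for k in (1, 9, 2):                       # keys 1 and 9 collide (both land in bucket 1)
    idx.insert(Row(k, f"payload-{k}", index_count=1))
found = [r.payload for r in idx.lookup(9)]
```

Note that the index itself holds only the bucket array; the chains live in the rows, which fits the "it doesn't store any data" reading above.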
What is the best data structure for loading an enterprise Power BI site?
Hi folks, I'd sure appreciate some help here!
I'm a kinda old-fashioned gal and a bit of a traditionalist, building enterprise data warehouses out of Analysis Services hypercubes with a whole raft of MDX for analytics. Those puppies would sit up and beg when you asked them to deliver up goodies
to SSRS or PowerView.
But Power BI is a whole new game for me.
Should I be exposing each dimension and fact table in the relational data warehouse as a single Odata feed?
Should I be running Data Management Gateway and exposing each table in my RDW individually?
Should I be flattening my stars and snowflakes and creating a very wide First Normal Form dataset with everything relating to each fact?
I guess my real question, folks, is what's the optimum way of exposing data to the Power BI cloud?
And my subsidiary question is this: am I right in saying that all the data management, validation, cleansing, and regular ETL processes are still required
before the data is suitable to expose to Power BI?
Or, to put it another way, is it not the case that you need to have a clean and properly structured data warehouse
before the data is ready to be massaged and presented by Power BI?
I'd sure value your thoughts and opinions,
Cheers, Donna
Donna Kelly

Dear All,
My original question was:
what's the optimum way of exposing data to the Power BI cloud?
Having spent the last month faffing about with Power BI – and reading about many people’s experiences using it – I think I can offer a few preliminary conclusions.
Before I do that, though, let me summarise a few points:
Melissa said “My initial thoughts: I would expose each dim & fact as a separate OData feed” and went on to say “one of the hardest things . . . is
the data modeling piece . . . I think we should try to expose the data in a way that'll help usability . . . which wouldn't be a wide, flat table ”.
Greg said “data modeling is not a good thing to expose end users to . . . we've had better luck with is building out the data model, and teaching the users
how to combine pre-built elements”
I had commented “. . . end users and data modelling don't mix . . . self-service so
far has been mostly a bust”.
Here at Redwing, we give out a short White Paper on Business Intelligence Reporting. It goes to clients and anyone else who wants one. The heart
of the Paper is the Reporting Pyramid, which states: Business intelligence is all about the creation and delivery of actionable intelligence to the right audience at the right time
For most of the audience, that means Corporate BI: pre-built reports delivered on a schedule.
For most of the remaining audience, that means parameterised, drillable, and sliceable reporting available via the web, running the gamut from the dashboard to the details, available on
demand.
For the relatively few business analysts, that means the ability for business users to create their own semi-customised visual reports when required, to serve
their audiences.
For the very few high-power users, that means the ability to interrogate the data warehouse directly, extract the required data, and construct data mining models, spreadsheets and other
intricate analyses as needed.
On the subject of self-service, the Redwing view says: although many vendors want to sell self-service reporting tools to the enterprise, the facts of the matter are these:
- 80%+ of all enterprise reporting requirements are satisfied by corporate BI . . . if it’s done right.
- Very few staff members have the time, skills, or inclination to learn and employ self-service business intelligence in the course of their activities.
I cannot just expose raw data and tell everyone to get on with it. That way lies madness!
I think that clean and well-structured data is a prerequisite for delivering business intelligence.
Assuming that data is properly integrated, historically accurate and non-volatile as well, then I've just described
a data warehouse, which is the physical expression of the dimensional model.
Therefore, exposing the presentation layer of the data warehouse is – in my opinion – the appropriate interface for self-service business intelligence.
Of course, we can choose to expose perspectives as well, which is functionally identical to building and exposing subject data marts.
That way, all calculations, KPIs, definitions, and even field names are all consistent, because they all come from the single source of the truth and not from spreadmart hell.
So my conclusion is that exposing the presentation layer of the properly modelled data warehouse is – in general - the way to expose data for self-service.
That’s fine for the general case, but what about Power BI? Well, it’s important to distinguish between new capabilities in Excel, and the ones in Office 365.
I think that to all intents and purposes, we’re talking about exposing data through the Data Management Gateway and reading it via Power Query.
The question boils down to what data structures should go down that pipe.
According to
Create a Data Source and Enable OData Feed in Power BI Admin Center, the possibilities are tables and views. I guess I could have repeating data in there, so it could be a flattened structure of the kind Melissa doesn’t like (and neither do I).
I could expose all the dims and all the facts . . . but that would mean essentially re-building the DW in the PowerPivot DM, and that would be just plain stoopid. I mean, not a toy system, but a real one with scores of facts and maybe hundreds of dimensions?
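As an aside, the "flattened" shape being rejected here is easy to picture in code. A toy sketch (invented table and column names, plain Python standing in for what would really be a SQL view): joining each fact row to its dimension attributes yields one wide 1NF row per fact, with the dimension attributes repeating on every row.

```python
# Hypothetical two-dimension star schema, flattened into one wide 1NF table.
dim_product = {1: {"product_name": "Widget", "category": "Tools"},
               2: {"product_name": "Gadget", "category": "Toys"}}
dim_date = {20240105: {"year": 2024, "month": 1}}
fact_sales = [
    {"product_id": 1, "date_id": 20240105, "amount": 9.99},
    {"product_id": 2, "date_id": 20240105, "amount": 4.50},
]

def flatten(facts, product_dim, date_dim):
    """Join each fact row to its dimension attributes -> one wide row per fact.

    Dimension attributes repeat on every fact row, which is exactly the
    'repeating data' redundancy that makes this shape unattractive."""
    wide = []
    for f in facts:
        row = dict(f)
        row.update(product_dim[f["product_id"]])
        row.update(date_dim[f["date_id"]])
        wide.append(row)
    return wide

wide_table = flatten(fact_sales, dim_product, dim_date)
```

With scores of facts and hundreds of dimensions, one such wide table per fact quickly becomes unmanageable, which is the point being made here.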
Fact is, I cannot for the life of me see what advantages DMG/PQ
has over just telling corporate users to go directly to the Cube Perspective they want, which already has all the right calcs, KPIs, security, analytics, and field names . . . and, most importantly, is already modelled correctly!
If I’m a real Power User, then I can use PQ on my desktop to pull mashup data from the world, along with all my on-prem data through my exposed Cube presentation layer, and PowerPivot the
heck out of that to produce all the reporting I’d ever want. It'd be a zillion times faster reading the data directly from the Cube instead of via the DMG, as well (I think Power BI performance sucks, actually).
Of course, your enterprise might not
have a DW, just a heterogeneous mass of dirty unstructured data. If that’s the case,
choosing Power BI data structures is the least of your problems! :-)
Cheers, Donna
Donna Kelly