How does data get into table /scwm/lagps in EWM?
Hi All,
Does anybody know how (or from which customizing transaction) data gets into table /scwm/lagps?
I had the same problem and have solved it.
thanks
Edited by: jun hu on Jun 17, 2011 7:44 AM
Similar Messages
-
How is data populated into tables like USR01, USR02, etc.?
Hi,
I have one theoretical doubt: how is the data populated into tables like USR01, USR02, etc. after creating a
user with SU01? Let me know the process behind it.
Rgds,
Chandra.
Hi Chinna,
When you create users using the SU01 or SU10 transaction codes, the system uses BAPI_USER_CREATE1, which updates the data in the respective tables.
In the same way, BAPI_USER_CHANGE is used when you modify an existing user.
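For illustration, here is a minimal sketch of how the parameters for such a BAPI call could be assembled from a script. The parameter structure names (USERNAME, ADDRESS-LASTNAME, PASSWORD-BAPIPWD) follow the BAPI's interface, but the helper function, the sample values, and the pyrfc connection details in the comment are illustrative assumptions, not a working setup:

```python
# Sketch: assembling the importing parameters for BAPI_USER_CREATE1.
# The helper and sample values are hypothetical; a real call needs a
# configured RFC connection and proper password handling.

def build_user_create_params(username, last_name, initial_password):
    """Build the importing parameters for BAPI_USER_CREATE1."""
    return {
        "USERNAME": username,
        "ADDRESS": {"LASTNAME": last_name},      # structure BAPIADDR3
        "PASSWORD": {"BAPIPWD": initial_password},  # structure BAPIPWD
    }

params = build_user_create_params("JDOE", "Doe", "Init1234!")
print(params["USERNAME"])

# With a live system the call would look roughly like (assumption):
#   from pyrfc import Connection
#   conn = Connection(ashost=..., sysnr=..., client=..., user=..., passwd=...)
#   conn.call("BAPI_USER_CREATE1", **params)
#   conn.call("BAPI_TRANSACTION_COMMIT")
```

The BAPI only queues the change; the explicit BAPI_TRANSACTION_COMMIT is what actually persists the rows into USR01/USR02.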
Hope this answers!!
Warm Regards,
Raghu -
How do I find which tables the data in transaction fields comes from?
Hi abapers,
I am new to ABAP.
I have some data in one transaction.
For a particular record, I want to know which table a field's value comes from. How do I find that out?
How do I find the header and item tables for the transaction?
How do I find out which tables are used by that transaction?
By pressing F1 on a field we can find the table name or structure name along with the field name of that particular field.
If it is a structure, how do I find which table the field actually comes from?
For some fields I found the table name this way, by pressing F1 and double-clicking on the structure name,
but for some fields in another transaction only a structure is shown, not the table name.
Moderator Message: Basic Question. Please search.
Edited by: kishan P on Nov 13, 2010 3:20 PM
Hi,
You can do that with transaction ST05, where you have to:
1) activate the trace,
2) execute the transaction,
3) deactivate the trace after the transaction completes,
4) display the trace.
There you can find the step-by-step flow of where the data was retrieved from, including the tables.
If you don't know how, take help from the Basis people.
Regards
Deepak. -
How to get the data from a cluster table to BW
Dear All,
I want to extract data from R/3 to BW using two tables and one cluster, B2.
My report contains some fields from PA2001 and PA2002 and one cluster table, B2 (table ZES). Can I create a view using these three tables? If not, how can I get the data from the cluster? Can I create a generic DataSource using cluster tables directly?
In transaction SE11 the cluster (table ZES) shows as an invalid table.
I referred to some forums, but with no luck.
Can anybody tell me the procedure to get the data from a cluster (table ZES)?
Waiting for your results.
Thanks and regards
Rajesh
Hi Siggi,
Thank you for your reply..
I am also planning to write a function module to get the data. But the system says that the cluster table ZES does not exist (ZES is a standard table, yet it does not appear in SE11 either).
How can I use the fields from that table?
What can I do now? Can you please explain this point?
Waiting for your reply.
Thanks and Regards
Rajesh
Message was edited by:
rajesh -
How to pass the data from a input table to RFC data service?
Hi,
I am doing a prototype with VC, and I'm wondering how VC passes the data from a table view to a backend data service. For example, I have one RFC in the backend system with a table-type importing parameter, and I want to pass all the data from an input table view to the RFC. I guess it's possible, but I don't know how to do it.
I tried to create some events between the input table and the data service, but it seems there is no system event that can export the whole table to the backend data service.
Thanks for your answer.
Thanks for your answer. I tried solution 2: I created a "Submit" button and set the mapping scope to "All data rows", but it only works when I select at least one row; otherwise the data is not passed.
Another question: I have several table-type importing parameters, and for each table I have one "Submit" event. I want these tables to be submitted at the same time, but if I click the submit button in one table's toolbar, only the data of the table whose submit button was clicked is passed; for the other tables the data is not passed. How can I achieve this?
Thanks. -
How can we delete the data in the E fact table?
How can we delete the data in the E fact table?
Hi,
You cannot delete a request individually, but you can do one of the following:
1. Do a selective deletion from the cube: RSA1 -> Cube -> Contents -> Selective Deletion.
2. Delete all the data in the cube and then reconstruct only the required request IDs. This works only if the PSA is still available for all the requests.
3. Reverse posting is another possibility.
hope it helps,
partha -
How to recover the data from a dropped table in production/archive mode
How do I recover the data/changes of a table that was dropped by accident?
The database is in archivelog mode.
Which Oracle version? If 10g,
try this way:
SQL> create table taj as select * from all_objects where rownum <= 100;
Table created.
SQL> drop table taj ;
Table dropped.
SQL> show recyclebin
ORIGINAL NAME  RECYCLEBIN NAME                 OBJECT TYPE  DROP TIME
TAJ            BIN$b3MmS7kYS9ClMvKm0bu8Vw==$0  TABLE        2006-09-10:16:02:58
SQL> flashback table taj to before drop;
Flashback complete.
SQL> show recyclebin;
SQL> desc taj;
Name            Null?    Type
OWNER                    VARCHAR2(30)
OBJECT_NAME              VARCHAR2(30)
SUBOBJECT_NAME           VARCHAR2(30)
OBJECT_ID                NUMBER
DATA_OBJECT_ID           NUMBER
OBJECT_TYPE              VARCHAR2(19)
CREATED                  DATE
LAST_DDL_TIME            DATE
TIMESTAMP                VARCHAR2(19)
STATUS                   VARCHAR2(7)
TEMPORARY                VARCHAR2(1)
GENERATED                VARCHAR2(1)
SECONDARY                VARCHAR2(1)
SQL>
M.S.Taj
How to fill the data of two different tables into one?
Hi Experts,
I have two tables named CDHDR and CDSHW (a structure). I have extracted the data from these two tables through two function modules, CHANGEDOCUMENT_HEADER and CHANGEDOCUMENT_POSITION. Now I have the data in two different internal tables.
These two tables neither have a relationship with each other through any field nor have any field that exists in both. Can anyone tell me what the process should be in this case to bring the data of both tables into one table? How can I match the records of one table to another?
thanks a ton in advance.
Edited by: Moni Bindal on Apr 28, 2008 4:16 PM
Edited by: Alvaro Tejada Galindo on Apr 28, 2008 1:42 PM
Hi Bindal,
Without a relation, it is not possible to meaningfully join the data of two internal tables. Moreover, it depends on the requirement why you want to combine two unrelated sets of data in one internal table.
If you still wish to do so, one approach is to define an internal table whose structure includes both structures:
types: begin of ty_out,
         first  type first_structure,
         second type second_structure,
       end of ty_out.
data: itab type standard table of ty_out.
data: wa  type ty_out.
data: wa1 type first_structure,
      wa2 type second_structure.
loop at it1 into wa1.
  clear wa.
  wa-first = wa1.
  append wa to itab.
endloop.
loop at it2 into wa2.
  clear wa.
  wa-second = wa2.
  append wa to itab.
endloop.
Now the internal table itab will contain all the contents of it1 and it2.
Thanks,
Imran.
Edited by: Alvaro Tejada Galindo on Apr 28, 2008 1:43 PM -
How is the data entered into the customized table?
Hi,
In an implementation scenario, when we create a generic extraction, how is the data entered
into the customized table if it is a large amount of data (around 5,000 records)?
Regards,
Vivek
Hi Vivek,
Follow the steps below:
1. Go to RSO2.
Choose the DataSource type from the three below:
a) Transaction data
b) Master data attributes
c) Master data text
2. Specify the application component (SD/MM/...).
3. There are three extraction methods to fill the DataSource.
4. The selected extraction method extracts the data from a transparent table or database view.
5. If you select Extraction from View, you have to create the view first:
a) Specify the view name.
b) Choose the view type (here, a database view) from the views below:
i) Database view
ii) Projection view
iii) Maintenance view
iv) Help view
6. Specify the tables and join conditions, and define the view fields.
7. Assign the view to the DataSource.
8. Once you specify the view in the DataSource, the extract structure will be generated.
9. You can check the data in RSA3.
Regards,
Suman -
How is the data inserted into CST_INV_QTY_TEMP table?
Hi All,
How is the data inserted into CST_INV_QTY_TEMP table ?
Thanks in advance,
Mayur
Edited by: 928178 on 17-Apr-2012 04:29
How is the data inserted into CST_INV_QTY_TEMP table?
TABLE: BOM.CST_INV_QTY_TEMP
http://etrm.oracle.com/pls/et1211d9/etrm_pnav.show_object?c_name=CST_INV_QTY_TEMP&c_owner=BOM&c_type=TABLE
Thanks,
Hussein -
How to check the data of an archived table.
I have archived a table created by me and executed the write program for the archiving object in SARA. Now how can I check the data of my archived table?
Hello Vinod,
One thing to check in the customizing settings is your "Place File in Storage System" option. If you have selected the option to Store before deleting, the archive file will not be available for selection within the delete job until the store job has completed successfully.
As for where your archive file will be stored - there are a number of things to check. The archive write job will place the archive file in whatever filesystem you have set up within the /nFILE transaction. There is a logical file path (for example ARCHIVE_GLOBAL_PATH)where you "assign" the physical path (for example UNIX: /sapmnt/<SYSID>/archivefiles). The logical path is associated with a logical file name (for example ARCHIVE_DATA_FILE_WITH_ARCHIVE_LINK). This is the file name that is used within the customizing settings of the archive object.
Then the file will be stored using the content repository you defined within the customizing settings as well. Depending on what you are using to store your files (IXOS, IBM CommonStore, SAP Content Server, ...), that is where the file will be stored.
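The logical-to-physical mapping described above can be sketched as a simple placeholder substitution (a toy sketch only; the sample archive filename is hypothetical, and the real resolution in transaction FILE supports many more placeholders than <SYSID>):

```python
# Sketch: resolving a logical file path to a physical one, in the spirit
# of the UNIX example above. The filename used below is hypothetical.

def resolve_logical_path(physical_path_template, sysid, filename):
    """Substitute the <SYSID> placeholder and append the file name."""
    directory = physical_path_template.replace("<SYSID>", sysid)
    return directory.rstrip("/") + "/" + filename

path = resolve_logical_path("/sapmnt/<SYSID>/archivefiles", "PRD",
                            "RM07MARC_001.ARCHIVE")
print(path)  # /sapmnt/PRD/archivefiles/RM07MARC_001.ARCHIVE
```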
Hope this helps.
Regards,
Karin Tillotson -
Stage tab delimited CSV file and load the data into a different table
Hi,
I am pretty new to writing PL/SQL packages.
We are using Application Express for our development. We get CSV files which are stored as BLOB content in a table. I need to write a trigger that gets executed once the user uploads the file, parses through the BLOB content, and uploads or stages the data in a different table.
I would like to see if there is any tutorial or article that explains the above process with an example or sample code. Any help in this regard will be highly appreciated.
Hi,
This is slightly unusual but at the same time easy to solve. You can read through a blob using the dbms_lob package, which is one of the Oracle supplied packages. This is presumably the bit you are missing, as once you know how you read a lob the rest is programming 101.
Alternatively, you could write the lob out to a file on the server using another built in package called utl_file. This file can be parsed using an appropriately defined external table. External tables are the easiest way of reading data from flat files, including csv.
I say unusual because why are you loading a CSV file into a BLOB? A CLOB would be almost understandable, but if you can load into a column in a table, why not skip this bit and just load the data as it comes in, straight into the right table?
All of what I have described is documented functionality, assuming you are on 9i or greater. But you didn't provide a version so I can't provide a link to the documentation ;)
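The parse-and-stage step the trigger would perform can be sketched language-agnostically (a Python sketch of the logic only; in the database you would read the BLOB in chunks with dbms_lob and INSERT each parsed row; the sample column names are hypothetical):

```python
# Sketch: parse a tab-delimited CSV held as raw bytes (the BLOB content)
# into rows ready for insertion into a staging table.
import csv
import io

def stage_csv_blob(blob_bytes, encoding="utf-8"):
    """Decode the blob and split it into rows; the post says the files
    are tab delimited, hence the tab delimiter."""
    text = blob_bytes.decode(encoding)
    reader = csv.reader(io.StringIO(text), delimiter="\t")
    return [row for row in reader if row]  # skip blank lines

blob = b"EMPNO\tENAME\n7839\tKING\n7698\tBLAKE\n"  # hypothetical sample
for row in stage_csv_blob(blob):
    print(row)
```

In PL/SQL the loop body would become an INSERT into the staging table instead of a print.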
HTH
Chris -
Hi Experts,
How is the data stored in an InfoCube and a DSO? What happens in the back end?
I mean, a cube contains a fact table and dimension tables; how is the data stored, and what happens in the back end?
Regards,
Swetha.
Hi,
Please check :
How is data stored in DSO and Infocube
InfoCubes are made up of a number of InfoObjects. All InfoObjects (characteristics and key figures) are available independent of the InfoCube. Characteristics refer to master data with their attributes and text descriptions.
An InfoCube consists of several InfoObjects and is structured according to the star schema. This means there is a (large) fact table that contains the key figures for the InfoCube, as well as several (smaller) dimension tables which surround it. The characteristics of the InfoCube are stored in these dimensions.
An InfoCube fact table only contains key figures, in contrast to a DataStore object, whose data part can also contain characteristics. The characteristics of an InfoCube are stored in its dimensions.
The dimensions and the fact table are linked to one another using abstract identification numbers (dimension IDs) which are contained in the key part of the particular database table. As a result, the key figures of the InfoCube relate to the characteristics of the dimension. The characteristics determine the granularity (the degree of detail) at which the key figures are stored in the InfoCube.
Characteristics that logically belong together (for example, district and area belong to the regional dimension) are grouped together in a dimension. By adhering to this design criterion, dimensions are to a large extent independent of each other, and dimension tables remain small with regards to data volume. This is beneficial in terms of performance. This InfoCube structure is optimized for data analysis.
The fact table and dimension tables are both relational database tables.
Characteristics refer to the master data with their attributes and text descriptions. All InfoObjects (characteristics with their master data as well as key figures) are available for all InfoCubes, unlike dimensions, which represent the specific organizational form of characteristics in one InfoCube.
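The dimension-ID linkage described above can be illustrated with a toy example (hypothetical data; a real InfoCube uses generated DIMIDs and SIDs, and the join is done by the database, not application code):

```python
# Toy star schema: the fact table holds only key figures plus dimension IDs;
# the dimension table maps those IDs back to characteristic values.

region_dim = {1: {"district": "North", "area": "A1"},
              2: {"district": "South", "area": "B2"}}

fact_table = [
    {"DIMID_REGION": 1, "sales": 100.0},
    {"DIMID_REGION": 1, "sales": 250.0},
    {"DIMID_REGION": 2, "sales": 75.0},
]

# Resolve dimension IDs to characteristics and aggregate the key figure,
# i.e. the granularity is determined by the characteristics.
totals = {}
for row in fact_table:
    district = region_dim[row["DIMID_REGION"]]["district"]
    totals[district] = totals.get(district, 0.0) + row["sales"]

print(totals)  # {'North': 350.0, 'South': 75.0}
```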
http://help.sap.com/saphelp_nw04s/helpdata/en/4c/89dc37c7f2d67ae10000009b38f889/frameset.htm
Check the threads below:
Re: about Star Schema
Differences between Star Schema and extended Star Schem
What is the difference between Fact tables F & E?
Invalid characters erros
-Vikram -
How is the data fetched from the cube for reporting, with and without BIA?
hi all,
I need to understand the below scenario (how the data is fetched from the cube for reporting):
I have a query on a MultiProvider connected to cubes, say A and B. A is on a BIA index, B is not. There are no aggregates created on either cube.
CASE 1: I have taken the RSRT stats with BIA on; in the aggregation layer it says:
Basic InfoProvider  Table type  Viewed at   Records, Selected  Records, Transported
Cube A              (blank)     0.624305    8,087,502          2,011
Cube B              E           42.002653   1,669,126          6
Cube B              F           98.696442   2,426,006          6
CASE 2: I have taken the RSRT stats with the BIA index disabled; in the aggregation layer it says:
Basic InfoProvider  Table type  Viewed at   Records, Selected  Records, Transported
Cube B              E           46.620825   1,669,126          6
Cube B              F           106.148337  2,426,030          6
Cube A              E           61.939073   3,794,113          3,499
Cube A              F           90.721171   4,293,420          5,584
Now my question is: why is there such a huge difference in the number of records transported for cube A compared to case 1? The input criteria for both cases are the same, and the result output matches. There is no change in the number of records selected for cube A in either case; it is 8,087,502 in both.
Can someone please clarify this difference in the records being selected?
Hi,
Yes, Vitaliy's guess could be right. Please check whether FEMS compression is enabled (note 1308274).
What you can do to get more details about the selection is to activate the execution plan for SQL/BWA queries in the data manager. You can also activate the trace functions for BWA in RSRT. That way you can see how both queries select their data.
Regards,
Jens -
How is the data fetched from the cube for reporting?
hi all,
I need to understand the below scenario (how the data is fetched from the cube for reporting):
I have a query on a MultiProvider connected to cubes, say A and B. A is on a BIA index, B is not. There are no aggregates created on either cube.
CASE 1: I have taken the RSRT stats with BIA on; in the aggregation layer it says:
Basic InfoProvider  Table type  Viewed at   Records, Selected  Records, Transported
Cube A              (blank)     0.624305    8,087,502          2,011
Cube B              E           42.002653   1,669,126          6
Cube B              F           98.696442   2,426,006          6
CASE 2: I have taken the RSRT stats with the BIA index disabled; in the aggregation layer it says:
Basic InfoProvider  Table type  Viewed at   Records, Selected  Records, Transported
Cube B              E           46.620825   1,669,126          6
Cube B              F           106.148337  2,426,030          6
Cube A              E           61.939073   3,794,113          3,499
Cube A              F           90.721171   4,293,420          5,584
Now my question is: why is there such a huge difference in the number of records transported for cube A compared to case 1? The input criteria for both cases are the same, and the result output matches. There is no change in the number of records selected for cube A in either case; it is 8,087,502 in both.
Can someone please clarify this difference in the records being selected?
Hi Jay,
Thanks for sharing your analysis.
The only reason I can think of logically is that BWA has the information from both the E and F tables in one place, and hence after selecting the records it is able to aggregate them and transport the aggregated records to the OLAP processor.
In the second case, since the E and F tables are read separately, the aggregation might be happening in the OLAP processor, and hence you see a higher number of records transported.
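The effect described above can be illustrated with a toy example (hypothetical rows; real fact tables key on dimension IDs, and the actual aggregation happens in the database or BWA engine):

```python
# Toy fact rows: (characteristic combination, key figure value).
from collections import Counter

e_rows = [("C1", 10), ("C2", 5), ("C1", 3)]
f_rows = [("C1", 7), ("C3", 2), ("C2", 1)]

# Separate reads of E and F: each table aggregates on its own,
# and both partial result sets are transported to the OLAP processor.
agg_e = Counter()
for combo, value in e_rows:
    agg_e[combo] += value
agg_f = Counter()
for combo, value in f_rows:
    agg_f[combo] += value
transported_separate = len(agg_e) + len(agg_f)  # 2 + 3 = 5 rows

# Combined read (BWA-style): aggregate across E and F in one place
# before transporting, so shared combinations collapse into one row.
agg_all = Counter()
for combo, value in e_rows + f_rows:
    agg_all[combo] += value
transported_combined = len(agg_all)  # 3 rows

print(transported_separate, transported_combined)  # 5 3
```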
Our experts in the BWA forum might be able to answer in a better way if you post this question over there.
Thanks,
Krishnan